Curated stories about engineering and science, revolving around time series datasets.
Technical articles, customer use cases and Marple news.
We are excited to announce that we are joining the InfluxData ecosystem. InfluxDB is the leading time series platform for storing test and measurement data. Marple can now be coupled to an InfluxDB database, giving engineers the power of both. In this blog post, we dive deeper into what the integration offers and how to set it up.
Engineers traditionally capture sensor data with loggers. Most of these devices store their data in files: CSV, MAT, TDMS, HDF5, … But in recent years, we have seen our customers expand their workflows to also ingest it into time series databases, such as InfluxDB. Having the sensor output in a database gives a clean interface from which other tooling can consume the data. We believe this is a great step forward in opening up the data to web-based tooling for engineers.
But sensor data is a unique use case for InfluxDB. The data is time series, but it is often not one continuous stream. Measurements are frequently done as part of an ad hoc test or a larger testing campaign. These tests typically have a start and end time and can be sampled as fast as 10 kHz. Visually analysing this kind of data calls for tooling that is highly interactive.
“We see engineering teams leaving behind file-based workflows for storing sensor data. Time series databases play a crucial role in opening up the data analysis ecosystem.” - Nero Vanbiervliet, CTO
Marple users who already have an InfluxDB instance can now benefit from a better workflow. The electric racing team of Delft is already streaming the data from their car to InfluxDB. In the past, they used to export from InfluxDB to a file and import that file into Marple. Today, they access their measurements directly in the Marple interface. Marple reads straight from InfluxDB, eliminating the need to store the data twice.
Marple expands your visualisation toolbox with things engineers need:
All of these are built with interactivity in mind: behind the scenes, Marple and InfluxDB speed up the queries for you.
Marple also allows you to enrich datasets with metadata. One InfluxDB bucket can contain billions of sensor data points from different tests and simulations. Engineers from Verhaert are using Marple to keep their database organised. They assign metadata such as a test number, who was responsible, and what equipment was used. In some cases they even attach a picture of the setup, or a PDF report.
If you want to see it for yourself, give it a try with a free account. Connecting your InfluxDB database should take you less than 10 minutes. Watch the video below, or read our getting started docs.
Last Thursday, we were thrilled to host Wouter Plaetinck, flight test engineer at Lilium, who talked about the ongoing projects at Lilium and how they integrated Marple as an essential tool in their flight test department.
Lilium is a leading company in the development of electric aircraft. They are developing a vertical take-off and landing jet that can carry passengers. Currently, they are testing with Phoenix, a jet that is piloted from the ground. For this, they perform many test flights at their facility in Spain. Lilium has been growing quickly in the past years; four years ago, they were a company with 100 employees, but now more than 800 people work there. As a result, their test capabilities have also grown. Wouter explains:
“When I started, we were only able to conduct one flight test a week. We have now improved to six tests a week, and our next aim is to perform more than one test a day.” - Wouter Plaetinck
You can imagine that each flight test generates quite a lot of data! Let’s dive deeper into all those data points.
Each test flight results in a collection of data points collected by all the sensors present on Phoenix. For example, analog sensors collect data about vibrations, temperature or the load on the jet. They also have air pressure sensors which measure airspeed, and vanes which measure angle of attack and sideslip. Overall, a single test flight records more than 10,000 parameters, generating 0.5 GB of data per minute. All of this data is then added to a test database which contains approximately 15 TB of data. This is a huge amount; to give you an idea, 15 TB is equivalent to 3.75 million pictures taken with an iPhone 12. From this, system engineers and flight test engineers request subsamples to conduct their analysis.
This is where Marple comes into play. Wouter showed a direct use case of Marple with data collected from a flight test. For example, he used Marple to find the wind speed at take-off and landing by superposing multiple datasets. He concluded the presentation by emphasising the importance of Marple in post-flight analysis because it “enables users to easily drag and drop files to do cross-time analysis, it requires no coding and makes the whole work process easier”. He acknowledged that more traditional tools such as IADS remain important for real-time analysis during the flight, but that Marple is an essential tool in their daily post-flight work. Thank you to Wouter for coming over and showing us how Marple is invaluable for the Lilium Flight Test Team!
In this blog post, we explain why Marple is so good and fast at drawing plots of large amounts of time series data.
How do we visualise so many data points? The short answer is: we don’t.
We smartly select which data points to show, and give a small twist to how we show them.
This way, we manage to quickly load plots that are accurate enough to analyse your data.
Data from sensors can quickly grow to large data sets.
People want to visualise this data but quickly realise that plotting millions of data points can be very slow.
It can cause your laptop or browser to freeze, forcing you to select a small time range or be very careful about what you plot.
We solve this problem by cutting down the amount of data that is actually visualised.
A fraction of the data points gives more or less the same plot. And by rendering fewer data points, the software becomes faster.
When you cut down the data that you use to draw a plot, you need to answer two questions.
How many data points do we show?
This mostly depends on the preference of the user.
Loading 2000 data points per plot gives a resolution of sufficiently high quality. This is the standard resolution we use at Marple.
Lower or higher plot resolutions also do the job, as shown below.
Which data points do we show?
Fixing the number of data points we render in a plot boils down to reducing the number of data points that are shown.
This means that we need to decide which data points we show.
It makes sense to divide the time range of the plot by the number of data points we show. In each resulting fraction (we call them ‘buckets’), we then need to select one data point to show.
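To make this concrete, here is a minimal sketch of this bucketing idea in Python with numpy (illustrative only, not Marple's actual implementation). It keeps the first sample of each bucket; whether ‘first’ is the right choice is exactly the next question.

```python
import numpy as np

def first_per_bucket(t, y, n_buckets):
    """Downsample by keeping the first sample of each time bucket."""
    # Split the full time range into equally sized buckets.
    edges = np.linspace(t[0], t[-1], n_buckets + 1)
    # For every sample, find the bucket it falls into (0 .. n_buckets - 1).
    bucket = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, n_buckets - 1)
    # Keep the index of the first sample in every non-empty bucket.
    _, first_idx = np.unique(bucket, return_index=True)
    return t[first_idx], y[first_idx]

# 1 million noisy samples reduced to roughly 2000 plotted points.
t = np.linspace(0, 10, 1_000_000)
y = np.sin(t) + 0.1 * np.random.randn(t.size)
t_plot, y_plot = first_per_bucket(t, y, 2000)
```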
Do we show the first, last or average?
The noisier a measured signal gets, the more selecting either the first or the last data point in a bucket skews the representation of the measurement.
Here is an example of a plot that, in full resolution, has 22 data points.
If we want to scale down the number of data points by a factor of 4, we can subdivide the data points into 6 time buckets.
If we select the first data point in each bucket, we end up with this plot.
First data point per bucket selected
Since we work with really low resolutions, we need to accept that the image of the plot will be skewed.
What is more troublesome is that the trend we see is quite different compared to when we select the last data point in each bucket.
There are other solutions, such as drawing a data point in the middle of each bucket that is the average of all data points in that bucket.
You could also take the average of the first and last data point of the bucket, or some other function of the data points in the bucket.
Whatever you try, it is hard to ensure that the plot will be a sufficiently good representation of the measurement.
What if we don’t actually show data points?
The problem seems to be that, when selecting just one data point per bucket, you throw out the high-frequency content of your measurement.
The solution to this problem is obvious and non-obvious at the same time: make the entire bucket a data point!
Technically speaking, we no longer deal with points, but areas.
The upper bound of the bucket area is determined by the data point in the bucket with the highest value. The lower bound of the area is determined by the lowest value in the bucket.
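In code, the min/max idea is a small change to the bucketing sketch above (again a simplified illustration, not our production code):

```python
import numpy as np

def minmax_per_bucket(t, y, n_buckets):
    """Per bucket, keep the lowest and highest value: an area instead of a point."""
    edges = np.linspace(t[0], t[-1], n_buckets + 1)
    bucket = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, n_buckets - 1)
    lower = np.full(n_buckets, np.inf)    # stays inf for empty buckets
    upper = np.full(n_buckets, -np.inf)
    np.minimum.at(lower, bucket, y)       # lowest value per bucket
    np.maximum.at(upper, bucket, y)       # highest value per bucket
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, lower, upper
```

Drawing the band between `lower` and `upper` (for example with matplotlib's fill_between) gives exactly the bucket areas described here.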
Granted, at a very low resolution, this obviously looks a little ridiculous. But the further we increase the number of buckets a plot consists of, the better this solution works.
We’re still just rendering a fraction of the data points that were actually measured, but we already see a plot that gives us a good feeling of the measured signal.
As a final touch, we can connect the buckets in a smoother, more gradual way. As a result, the plot no longer looks clunky.
When placing a cursor in the plot, we still show an actual data point that corresponds with where the cursor is.
And when you zoom in enough so that the data points in your zoom level drop below your chosen resolution, we of course show all data points.
Plotting a lot of data points is hard to do in a performant way. That's why we don't do it.
Instead we do clever visual subsampling of the data in order to keep the plot performant.
In return you can really play with your data. This gives you the ability to intuitively explore your data and easily discover areas that are of further interest to you.
Just give Marple a try if you want to have a look at how this theory is put into practice! We’re always happy to hear what you think.
PID controllers: whether you are a control engineer or not, I am sure you have heard of them.
Due to their simplicity and robustness, PID controllers are one of the most popular control methods. They are used in many different industries and applications. PID controllers can be (and need to be) tuned specifically for the application. Tuning will also impact the performance of the controller: how quickly it responds, how much it overshoots, how it reacts to vibrations, etc.
Of course, because they are so simple and popular, they are often misused, which can lead to frustration during the tuning process. A lot of theory has been written about PID controllers, but in the end, you need to implement them in real life.
In this article, I want to give you a practical guide on how to use a PID controller and what I learned from my experience (I worked as a control engineer on drones, electrical race cars and Formula One). I will explain using an example where we will tune the cruise control of a road car.
PID is actually an acronym. The controller consists of 3 components:
- Proportional
- Integral
- Derivative
I would guess that 90% of all PID controllers in the world are actually just P-controllers. PI-controllers are also quite common, but the Derivative term is not commonly used. (Spoiler: if you need a Derivative term you probably have a problem with your system or design)
The flowchart represents the PID controller. Don’t focus on the details; the main takeaway is that there is an input, the 3 PID components and an output. The ‘Gains’ are the numbers we can tune.
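In code, those three components and their gains fit in a handful of lines. A minimal, hypothetical sketch (with the integral bound that we will discuss later already included):

```python
class PID:
    def __init__(self, kp, ki, kd, integral_bound):
        self.kp, self.ki, self.kd = kp, ki, kd   # the gains we tune
        self.integral_bound = integral_bound     # see the integral section below
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, actual, dt):
        error = target - actual
        # Proportional: react to the current error.
        p = self.kp * error
        # Integral: accumulate the error over time, bounded to prevent wind-up.
        self.integral += error * dt
        self.integral = max(-self.integral_bound,
                            min(self.integral, self.integral_bound))
        i = self.ki * self.integral
        # Derivative: react to how fast the error is changing.
        d = self.kd * (error - self.prev_error) / dt
        self.prev_error = error
        return p + i + d
```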
PID controllers can only be used for linear systems. That means that there needs to be a linear relationship between the action of the PID controller and the variable you are trying to control. In very simple terms: higher action = higher output, lower action = lower output. There are cases where this is not true, think about systems with angles or quadratic systems.
PID controllers are actually quite simple, so they are usually also a good fit for simple systems. If a system is quite complex, you will have a bad time implementing a PID controller.
A PID controller to control the speed of a rotor? - Perfect!
A PID controller to control a double inverted pendulum? - Think twice.
It is possible to use PID controllers in more complex systems; the trick is to linearize the system, and you might need a combination of multiple PID controllers in series. But let’s not get into that.
As with many things in life, the first step is to think: “what would I do?”. This is also applicable when tuning a PID controller.
You are using a PID controller to solve some problem. What problem? Take a situation and think for yourself, what do I expect the controller to do?
Let’s use our car’s cruise control example! If we are driving 80 km/h, but the cruise control is set to 100 km/h, what do you want the controller to do? It needs to accelerate the car! So it needs to add some throttle, maybe 20%. If we are driving 90 km/h, we also need to add throttle, but not as much as before, let’s say 10%. When we drive over the limit, let’s say 110 km/h, we do not want to accelerate anymore; maybe we even want to brake a bit. You could say -10% throttle (which is braking).
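Put in a table, that reasoning looks like this:

| Actual speed | Target speed | Error | Desired throttle |
|---|---|---|---|
| 80 km/h | 100 km/h | 20 km/h | 20% |
| 90 km/h | 100 km/h | 10 km/h | 10% |
| 110 km/h | 100 km/h | -10 km/h | -10% |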
This step may seem silly, but it’s the basis of a good PID controller design! If you don’t understand the problem, your PID controller won’t understand it either and you won’t even know what is wrong with it.
As I mentioned at the beginning, 90% of all PID controllers are actually only P-controllers. The P-gain is therefore also a good one to start the tuning process. (PS: depending on your system, it can be interesting to set all gains to 0 and observe what happens to the system without any input).
To set the initial P-gain value, simply look back at step 1 and continue the reasoning. The input to the PID controller is usually the difference between the target and the actual value. With an initial guess of what output we want for a given input, we can calculate the P-gain. Looking at the table from step 1, the P-gain seems to be 1: a 20 km/h error should give 20% throttle, so Kp = 20 / 20 = 1. This method will give you at least the correct order of magnitude for the P-gain.
Try out the initial gain using a step response in a simulated environment. How did that go? Next up you want to explore what happens if you make the P-gain 10x smaller and larger. This will give you a feel of what is too little, and what is way too much. If you are in a simulated environment, make sure you go over the limit and find out what value is too high.
If you notice that your controller approaches the setpoint but never quite reaches it, that’s normal. It’s called the steady-state offset and we’ll fix it with the Integral part of the PID.
The integral component of the PID controller is used to counter constant disturbances or offsets. Think about wind and friction. As the name suggests, the I-part will make use of an integral in order to determine the output. This makes things a bit more complicated.
Integral controllers can be quite dangerous as well. You should always bound the integral to prevent it from blowing up. I think about 25% of all the problems I have seen with PID controllers were due to badly capped integrals. Just make sure you bound it to whatever value is reasonable for your problem.
Look at the results from step 2. If you do not see a steady-state offset, you do not need an integral controller so you can leave the I-gain at 0. If you do have a steady-state offset we will need an integral controller. What I usually do is look at how big the steady-state error is and what the P-controller output is at that point. You also need to determine what a reasonable time range is for your integral controller to react. Then I do the following calculation to determine the initial I-gain.
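In formula form: I-gain = P-output / (steady-state error × reaction time). With the numbers from the cruise control example later in this article, that works out to 12 / (6 × 2) = 1.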
In order to determine the bounds of the integral, you can simply take 5 times the expected error, integrated over your reaction time. If you want to be more strict you can do 2x, or if you want to give more freedom to the controller you can do 10x.
Try out the initial gain and see what happens. Did it solve the steady-state error? Do you have an overshoot now?
If the initial gain did not solve the steady-state error, you probably need a higher I-gain. So feel free to make it 10x larger!
If you have massive overshoot and oscillations, you may need a smaller I-gain (a larger I-gain can sometimes also solve it), but most likely the bounds on your integral are too loose. If you are in a simulated environment you can quickly check whether the integral value blows up or not.
Note that the integral will always accumulate, even when the system is not in a steady state yet. There are more fancy methods than just bounding the integral, but personally, I feel like they create more problems than they solve. Have a look at ‘anti-windup’ if you are interested!
In my experience as a control engineer, I have never seen a successful PID controller with the derivative component. I have seen attempts, and I have seen them all fail miserably.
The idea of the derivative component is to create a damping or predictive effect. When your controller has a lot of overshoot, it will be very tempting to try and solve it with some D-gain. But actually, it just means there is something wrong with your controller or system.
The derivative part of the PID is very sensitive to noise and can usually only be tuned for one specific situation. In simulation, it might solve your problems for 1 specific case, but you’ll quickly run into trouble when implementing it in real life.
My advice: don’t use the D-component.
Create yourself a setup where you can easily (and safely!) test out different gains and different cases. This can often be achieved with a simple simulator. The simulator does not have to be an accurate model of reality, as long as the basic principles are the same you will already come a long way.
Make sure to log as much data as you can so you can see and understand what happens. Playing around in the simulated environment will create an understanding of the problem that will be useful when moving on to real-life testing.
Lastly, make sure you test different use cases. It is very tempting to tune your controller for only one situation but in reality, you encounter many different situations. Try to create a few different situations and check if your controller still performs as you expect.
Today we will tune the cruise control of a modern passenger car! I have already used the cruise control example a few times in this article. I made a simulated environment where we can try out different gains and see their effect.
The simulator has a very simple car model that uses a throttle % as an input. A PID controller will use the resulting speed and target speed to determine the amount of throttle needed. I also added a bit of a delay in the car's engine to make it a bit more difficult/realistic. Engines do not produce torque instantly.
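A stripped-down sketch of such a simulation loop (all constants here are illustrative and the car model is hypothetical; the real simulator linked below is more complete):

```python
import numpy as np

dt, t_end = 0.01, 60.0                    # time step and duration (s)
target = 100.0                            # cruise control setpoint (km/h)
kp, ki, integral_bound = 2.0, 1.0, 60.0   # PI gains and integral bound

speed, torque, integral = 80.0, 0.0, 0.0
log = []
for t in np.arange(0.0, t_end, dt):
    error = target - speed
    # PI controller with a bounded integral (no D term, as argued above).
    integral = np.clip(integral + error * dt, -integral_bound, integral_bound)
    throttle = np.clip(kp * error + ki * integral, -100.0, 100.0)  # % (negative = braking)
    # First-order engine lag: torque follows throttle with a ~0.5 s time constant.
    torque += (throttle - torque) * dt / 0.5
    # Toy longitudinal dynamics: thrust minus speed-proportional losses.
    speed += (0.1 * torque - 0.02 * speed) * dt
    log.append((t, speed, throttle))
```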
The simulator can be found on the public repository https://gitlab.com/marple-public/marple-tutorials. For the visualizations, I made use of Marple and the python integration.
We actually did this exercise already in the explanation. If the car goes too slow, we need to add throttle. If the car is too fast, we need to brake. In practice you probably want to add some limits such that the cruise control won’t make your car go full throttle.
We start with an initial value of 1, as determined already in the explanation above. The resulting speed can be seen below. Blue is the actual speed, green is the target speed. This already looks quite good! We have a steady-state offset of about 12 km/h, but that’s OK.
We can now try with a gain of 0.1 and 10 and see what happens. The results can be seen below, the colors now indicate the different gains. It can be seen that a gain of 10 responds quicker and that a gain of 0.1 is too little. You could argue that a gain of 10 is actually the best here, but given that it’s a passenger car I think this is a bit too aggressive. Let’s proceed with a gain of Kp = 2.
With a P-gain of 2 we have a steady-state offset of about 6 km/h. At that point the P-action equals 12. We can determine the initial I-gain using the formula from above, with 2 seconds as the time factor in the equation. This results in an I-gain of 1.
We also need to set a bound on our integral, using the equations above we get a bound of 60.
Simulating this results in the following response. There is a bit of overshoot, but the steady-state error has been solved!
The reason that we have overshoot is the integral that builds up in the first phase. We can set the bound a bit tighter in order to reduce the overshoot; this will however make it less robust to changing conditions. Using a bound of 20 you can clearly see the difference.
I also simulated without a bound to give you an idea of what happens… It’s all over the place.
Lastly, we still need to play around with the I-gain a bit to get a feeling if we are in the correct range. We can see that a gain of 10 does not work nicely and that a gain of 0.1 is not enough. So an I-gain of 1 was a good initial guess!
So far we have tested a constant step input, but what would happen if we created a more dynamic environment? Another popular way of testing a controller is using a sinusoidal input or multiple steps after each other. We can see that in both cases the controller performs quite OK. It obviously depends on the requirements for the car whether this performance is too quick or too slow.
I have given you a simple recipe to design and tune your PID (PI actually) controller. Use it for simple cases! When things get more complex, you will also need more complex solutions.
We created a cruise controller and applied the PID recipe; this worked quite well and we can be happy with the results.
People often frown when I tell them that I like CSV files for storing time series data. The criticism is usually one of these three:
And this is true. Despite this, the power of CSV becomes apparent when looking at data analysis from the practical perspective of an engineer. It is fast, easy to read and has rich tooling.
Example: I have a test setup with 17 signals measured at 1 kHz, giving me a 100 MB file after measuring for 15 minutes. Asking for the data at row 500 000 only takes ±60 ms.
This is certainly not a given for other file formats. Old MAT files, for example, need to load the whole dataset into RAM. Only then can you extract the specific lines you need. XLSX files have a similar problem: an XLSX file is actually a ZIP archive containing the actual data.
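A minimal illustration of that streaming property in Python (the file name is made up):

```python
import csv
from itertools import islice

# Stream through the file to row 500 000 without loading it into RAM.
with open("measurement.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    row = next(islice(reader, 499_999, None))
print(dict(zip(header, row)))
```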
CSV is just a text file. So it’s easy to take a quick peek inside to see what it contains.
Usually you are using a script or data pipeline to process your data file. You rely on this to be able to make calculations or visualisations of the data. For some data files, this processing will crash.
Then it’s up to you to understand why. Values might be missing, text might be scrambled, the encoding might be wrong, … Looking at the raw data underneath often reveals the problem. Having an easy way to debug this yourself saves a lot of time.
CSV files are everywhere: there is a huge pool of free libraries, storage and tools available to use.
For scripting, you can get started in any language. Pandas (Python), readmatrix (MATLAB) and readr (R) are examples of excellent tooling. Most IDEs also have some syntax highlighting that makes it easier to interpret the columns.
Common data storage solutions can also import from CSV. Postgres has COPY, InfluxDB has write and SQLite has .import. Once your data is in a database, it opens up to tooling like Grafana and Marple.
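As a sketch, pushing a CSV into SQLite from Python takes only a few lines (the file name, table and column layout here are made up):

```python
import csv
import sqlite3

con = sqlite3.connect("measurements.db")
con.execute("CREATE TABLE IF NOT EXISTS samples (time REAL, signal TEXT, value REAL)")

with open("measurement.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    # Assumes a three-column time/signal/value layout.
    con.executemany("INSERT INTO samples VALUES (?, ?, ?)", reader)
con.commit()
```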
After having a taste of CSV, you might still be concerned about the large file size or how to add metadata properly.
HDF5 might be what you are looking for. Beware that it adds complexity over CSV, but it gives you smaller files and a more flexible structure. How to organise the data inside the file is up to you, so take the time to properly agree upon a structure across your company or team. If you do it well, reading specific parts of the data from HDF5 can be faster than reading from a CSV.
Parquet is a second alternative. It is even better at compressing data than HDF5, which is why we see engineering companies using it mainly for long-term storage.
Both of these formats are also seeing adoption in libraries. For example, pandas can read them with read_hdf and read_parquet and write them with to_hdf and to_parquet.
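Writing and reading either format is a one-liner each (file names made up; Parquet needs pyarrow or fastparquet installed, HDF5 needs PyTables):

```python
import pandas as pd

df = pd.read_csv("measurement.csv")
df.to_parquet("measurement.parquet")
df.to_hdf("measurement.h5", key="data")
df2 = pd.read_parquet("measurement.parquet")
```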
The Agoria Solar Team is a team of KU Leuven students from various engineering disciplines who work together to create a new solar car every 2 years. With this solar car, we participate in international solar challenges against teams from all around the world. I am the race strategist of the team, and my role is to decide on the most optimal way to drive during the race, to achieve the best results.
To prepare for this, a lot of testing with the solar car is done beforehand. Testing = a bunch of solar car data! This data ranges from tire pressures to every electric signal we measure from our battery pack and solar panel. Since this year, we have partnered up with Marple, who provided us with an easy way of analysing all of this data. The use of projects in Marple and the option to visualise databases dynamically have made our life much easier. We use Marple every day for multiple applications: plotting the route of our next race with the map feature, checking solar panel data, weather predictions, specific motor controller data signals, efficiency measurements of our motor controller and much more.
~ Tine Wildiers - Agoria Solar Team
Marple raises half a million euro to give engineers lightning-fast insight into their data
ANTWERP TECH START-UP MARPLE RAISES HALF A MILLION EUROS
Antwerp-based tech start-up Marple has raised 500,000 euros in growth capital in a second investment round. The investment round was filled after less than one month due to healthy interest from private investors. Imec and VLAIO also participated in the investment round. The fresh capital will help Marple expand its team and further connect with the software market for R&D engineers.
Marple helps engineers innovate with smart software that processes and analyses data. "We focus on, among other things, the rapid and flawless conversion of large files of measurement data into crystal-clear graphs," says Matthias Baert, one of the founders of Marple.
Idea from Formula 1
The two founders of Marple are both engineers themselves. They noted that the software with which test and control engineers currently have to make do often leaves much to be desired.
Matthias worked as an engineer for the Mercedes Formula 1 team in the 2017 and 2018 seasons. He found that even within Formula 1, where cutting-edge technology is being developed, there was a lack of good software to efficiently organise and analyse the data collected from the cars. "This was the signal for us that there is a need for a better solution," says Matthias.
Seeing the trees through the data forest
Product development and product innovation are largely driven by the ideas and insights of engineers. With the latest developments in sensors and measuring equipment, engineers can collect massive amounts of data.
"During a test, every millisecond generates enormous amounts of data" says Nero, co-founder. "That's great for the engineer, but the challenge then becomes to not miss the forest through the trees. Marple aims to address that problem."
Marple focuses on three elements of the engineer's work process: data management, data visualisation and big data processing. The combination of the three makes Marple immensely powerful.
AI, Machine Learning and Big Data?
Marple is cautious on whether their software should be catalogued as AI. "Of course, various smart algorithms are the foundation of what we do. But our focus is primarily on giving engineers control over their data again, before focusing on automation with AI," says Matthias.
What will Marple do with the new capital?
The fresh growth capital primarily allows Marple to expand its team, both on the technical and on the sales side. The market, especially R&D, product development and lab environments, is very international. Marple's first customers come not only from Belgium, but also from the Netherlands and Germany. And it doesn't stop there: "In Scandinavian countries, for example, we also notice that the market is large," says Nero.
About Marple
Marple was founded in 2020 by Nero Vanbiervliet and Matthias Baert, and grew under the wings of the imec.istart incubation programme. In the meantime, Marple has more than doubled its team, and they want to use the funding to develop the product further and at the same time place it as well as possible on the market.
Welcome again fellow Marpleans!
If you remember what we said a couple of blog posts ago, we are supporting the Formula Student Team Delft (FSTD) with a sponsorship (aren't we generous)! For those who don't know, Formula Student is a worldwide competition for students to build and race their own electric race car.
Because the collaboration is such a success, we decided to make the trip from Antwerp to Delft to check out their workflow and show other Marpleans how cool it is to work with Marple.
It all starts with a good set-up. You want a high-frequency logger in order to extract enough data and really get the finest details of your configuration. The FSTD boys and girls go above and beyond to achieve this, even if this results in taping a laptop on top of the car. The basic logger in the car only logs at 250 Hz, and to test the motor controller they needed at least an 8 kHz logger.
Safety jackets on, let's go! Look, I know you're here for data and data analysis, but I won't deprive you of this cool driving footage either. The test in question was meant to fine-tune the motor controller. This means driving around some cones on the parking lot was enough!
Then what? I'll let Andrea, the George Clooney of control systems tell you himself:
Data visualisation in Marple.. Oh What A Dream! It took more time to walk to the office than to get the datasets into Marple. In fact, because they work with our infamous API, the upload of the dataset to the Marple data management system started as soon as his laptop connected to the internet, so there was no waiting time before they could start analysing the data. I say "they" because, as you know, Marple allows your whole team to look at the data simultaneously.
After quickly finding the possible improvements, they made a couple of tweaks and went to the track.
Good times in Delft!
That's it from us, hope you enjoyed it. If you want to know more about the use case please contact us about it!
Cheers,
The Marple team
Beloved Marple-enthusiasts,
We're back with another blog post, this time on your request!
Two weeks ago we asked you what kind of data you wanted us to visualise with Marple.
Cycling came out as a clear winner. Convenient! We had already planned to kick-off the Marple summer in Spain for a work-from-where-the-sun-shines week. Now we had an excuse to also bring our bikes.
The main reason we cycle is obviously to impress our friends and ex-girlfriends on Strava. Kidding, not kidding: Strava is a cool application to gain insights into your ride data and compare it with friends.
Strava also estimates the average power output of your ride in watts. This is an ambitious estimate, because next to speed, weight and road gradient (values Strava typically knows), there are multiple other variables that influence power output that Strava has no information about. Think about weather conditions, the aerodynamics of your position on the bike, whether you are drafting behind your friends (yeah you, Matthias), etc.
As Strava's estimated power output has been at the heart of quite a few discussions within our team before, we decided to put this estimate to a test.
We planned a route, rode our bikes, and uploaded the ride to Strava.
Idris rode with a standard power meter. Before uploading the ride to Strava, however, we cut the power data from the file.
First, we uploaded the ride data (without the power data) to Strava. Second, we fetched Strava's estimates of Idris' power output (we found this data easily in JSON format in the Network section of our browser's developer tools).
After converting this data to CSV format (using a plain and simple custom Python script), 99% of the work was done. In the blink of an eye, Marple parsed the CSV file into its database, and in one click we immediately had a very clear view of how the actual power output compared to Strava's estimation.
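That script was nothing fancy; a sketch along these lines (the JSON field names are hypothetical, the actual response from Strava may be shaped differently):

```python
import csv
import json

# Hypothetical shape: {"time": [...], "watts": [...]} as captured from the
# browser's network tab.
with open("strava_power.json") as f:
    streams = json.load(f)

with open("strava_power.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "estimated_watts"])
    writer.writerows(zip(streams["time"], streams["watts"]))
```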
Setting both metrics to the same scale shows that Strava's estimation is doing a pretty impressive job.
We noticed that Strava's estimated power output is quite noisy. No problem for Marple: with the click of a button, we applied a moving average to this data, creating a signal that tells a much clearer story.
By organising our data in different workbooks, and adding different metrics to those workbooks, we gain further insights into our data.
For instance, we noticed that when the gradient increases, Strava mostly underestimates the actual power output. The other way around: downhill, Strava mostly overestimates the power output. As these two tendencies cancel each other out in a round trip like ours, Strava's overall estimate is still pretty accurate.
Are there any insights we are missing?
With love from Spain,
The Marple crew
Hola los Marple aficionados!
We at Marple like fast cars. More so, we love the engineers who design them!
That's why we're really excited to announce that Marple is working together with the beautiful students of the Formula Student Team Delft (FSTD).
FSTD is designing an electric race car from scratch. Yes, that's as cool as it sounds!
They hope to collect prizes this summer at prestigious racing events in the Netherlands and Germany. Marple helps them to get on top of their test data, showing (visualising!) them the way to an efficient design process and hopefully victory in the races they are participating in.
True Marple insiders will know that Marple's founder and chief MBaerto used to be a member of the FSTD racing team when he was a student in Delft. This makes this collaboration even more special.
Are you also designing an electric race car from scratch? Or building something else Marple sounds useful for?
Make sure to reach out!
Much love a todos,
The Marple team
Hi there!
We have some exciting news again, this time about our product. We are moving from a desktop solution to a server-based solution, making Marple a web tool.
A server-based approach has many benefits:
We will soon launch a demo version online so you can experience the advantages Marple brings. For now, enjoy the short demo video below:
Do you want more information about our server product? Make sure to reach out!
The Marple team.
Hello there!
We've had a busy couple of months since our last blog post. In this post we want to give you an update on two elements: our second test period and our first summer interns. Let's go!
We've had a very successful second test period in May-June, with more than 100 users testing our tool and providing valuable feedback. In total, almost one hundred billion data points were analyzed by Marple during this period. That's amazing. We've seen new use cases that we find very interesting and that open up a new range of opportunities. We're currently brainstorming about our product to see if we can capitalize on these opportunities. But more on that in our next blog post.
Our team is expanding! Since the beginning of July, we've had Flor and Liesbeth joining our team as summer interns as part of their studies. Flor is improving the data handling of Marple by adding support for more data types as well as improving the internal data structure. Liesbeth is researching how we can improve on our current data subsampling techniques. They've only recently joined our team but are already making valuable contributions to Marple. It's great to have some extra (wo)manpower on the team!
Of course, a team is only a team once there is a team photo, so we lined up on a nice piece of grass.
Cheers and see you next time!
Matthias & Nero
PS: Mandatory jumping picture as a bonus (Flor almost went into orbit on this particular attempt)
We are very happy to announce that Marple has joined the imec.istart accelerator programme! Imec.istart is a startup accelerator in Belgium with a focus on tech startups. Therefore, it is a perfect fit for our company. The accelerator is a branch of IMEC, a renowned research center with a focus on microelectronics.
Imec.istart will support us in various ways including funding, coaching & mentoring, workshops and access to its large network. To top it off, we will be moving our offices to a co-working space in Antwerp. We look forward to joining a community of fellow startups when the corona dust has settled down.
Matthias & Nero