
The fact that I can keep track of my experiments easily:
- They are all in one place
- I can compare between runs
- I can log nearly everything I want; even when tuning with Optuna, I can see all the nice metadata and visualizations.
Review collected by and hosted on G2.com.
A few minor things:
- Every time I visit my project's page, the colors for each run are different. There is an option to change the color, but I do not want to have to change the color of each run, especially when I am tuning a model, which results in hundreds of runs.
- The stderr and stdout logs are unreadable, especially when I am using progress bars.
- The Optuna visualizations in full-screen mode are quite small, especially the contours. I would expect that on entering full-screen mode they would adjust to my screen size and resolution.
51 out of 52 Total Reviews for neptune.ai
I’ve found Neptune pretty easy to integrate. Although I don’t use it in the traditional way, I rely on it to track experiment evaluations and monitor the performance of our LLM-based applications. The tool is flexible and adapts to most types of tracking needs. Recently, I've had the need to map API call statuses, including error codes, and the process was seamless and fast.
Another super positive aspect is the incredible customer support. We have regular meetings, and the team is always willing to listen to our requests and translate them into practical solutions. They consistently share valuable resources and try to meet our needs.
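The status-mapping workflow described above can be sketched without any tracker at all. A minimal sketch, assuming made-up status codes and a hypothetical `api/status/<code>` key layout (not an actual Neptune convention); the real logging call would depend on your tracking SDK.

```python
from collections import Counter

# Hypothetical sample of API call results; the codes are made-up example data.
statuses = [200, 200, 429, 500, 200, 429]

# Aggregate counts per status code, then derive namespaced keys that a
# tracker could store (the key layout here is an assumption, not Neptune's).
counts = Counter(statuses)
metrics = {f"api/status/{code}": n for code, n in counts.items()}
print(metrics)
# → {'api/status/200': 3, 'api/status/429': 2, 'api/status/500': 1}
```

Keeping the aggregation separate from the logging call makes it easy to swap the backend or unit-test the mapping on its own.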
The only downside for me is the lack of customizability in the front-end. I’d love the ability to build tailor-made dashboards that better suit my specific data and provide a more intuitive visualization experience for team members with less technical expertise.
I've found Neptune to be a flexible, easy-to-use platform for tracking everything in our ML development process. The API is simple to set up and requires minimal code to track experiments. I've been loving the runs table; it makes it easy to group and filter experiments for quick comparisons.
Their support team has been great at responding to any questions and showing us new features as they are released.
Dashboard visualizations automatically resize when you adjust the window size, which means you then need to resize and reorder the visualizations at the new size. I would love a way to fix the size of dashboards so this wouldn't happen.

- Filter functionality on runs within a project
- Custom dashboards and saveable table schemas
- Reliably working on-prem deployment (GCP)
- Fast responses to questions and feature requests
- Logged content is organized under domains within runs
- Grouping runs within a project is difficult, which makes usage for large teams challenging
- Plots look great already but lack some flexibility
- We can always have more advanced filter criteria :)
Neptune AI allows you to store practically any necessary data; thanks to its way of storing data and metadata, it enables complete traceability in a simple way.
It has an easy-to-use UI that exposes practically all the necessary information in a simple and useful way, and allows some interesting customisations, such as adding/removing custom visualisations with only the necessary data/columns.
It also has a Python SDK that is quite easy to use and adapts to the use case in a quite flexible way; since Neptune provides an accessible data and metadata container, integration with other tools is quite feasible.
On the support side, they are usually quite quick to respond and provide a lot of help. Another very interesting point is that the documentation is quite extensive and has a lot of examples.
The great flexibility that Neptune AI provides as a container of data and metadata is both an advantage and a disadvantage, as it requires a good level of governance not to record too much information, and to organise it in the right way, for it to be really useful. This is not a ‘problem’ of the tool itself, but it is something to take into account when using it in a use case of some complexity.

The ease of use is super: just a couple of lines of code, and you are ready to track and log the metrics you want. Documentation and customer support are super efficient, and you can upload tons of metrics. After a couple of connection issues (solved easily), we were able to integrate from different pipelines on Azure ML Studio.
We experienced some problems with connection stability and timeouts while uploading metrics, but everything was solved easily after a fast communication with customer support.
Also, a slightly higher focus on LLM evaluation would be appreciated, especially in these LLM-centric days.

Neptune just works for tracking experiments: it has a nice, uncluttered UI that is quite fast (especially in the new version), and you can track metrics and compare different runs. The team also provides great support and is open to implementing new features on request. We use it for tracking all of our pretraining, post-training, and evaluations.
Comparing just the final metrics of multiple runs as a table is something Neptune can do, but the experience could be improved.
Also, some features important to us are still missing, like being able to log steps of one run non-monotonically from different processes.
Neptune helps track experiment data with a simple interface. The dashboard shows key metrics, graphs, and lets you compare different model runs side by side. Our team uses it to manage thousands of machine learning experiments, from initial training through fine-tuning and final evaluation. The system handles large amounts of data well and loads results quickly. You can organize experiments into projects, tag important runs, and easily search through past results. It also lets you log both training metrics and evaluation scores in one place.
Overall, there's not much to dislike. Two helpful additions would be synced legends across multiple plots when hovering over a specific x-axis point, and the ability to automatically group related metrics together for better organization.

We've been using Neptune for the past 4 months for our ML experiment tracking, with weekly or often daily use. Of similar tools, Neptune was the easiest to get started with and didn't require running a local server or setting up our own hosting. This has meant that we can seamlessly log experiments across multiple local machines and cloud compute alike. And for our current scale of use this is completely free!
We've found that it integrates well with sklearn (Python) and offers a couple of convenient ways to manage optimisation (e.g. grid search) experiments, which make up most of our current work.
The UI and documentation have been very intuitive to work with. And when we've hit issues, provided bug reports, or given feedback, the team at Neptune have been very responsive. Having an informal real-time chat through which to contact them has made us feel well supported at all times.
As others have noted, flexibility in the schema of your tracked data is great on one hand but creates a couple of issues. For example, text and numeric versions of the same field can exist. And there is currently no versioning of the schema (for example, to track when the logic behind creating a field changes). This could be managed using an additional field for the schema version, but it would be useful to have some verification that the expected schema version is being submitted.
Options for stricter schema versioning and typing would be a suggestion for the future :)
I'd also like to see options for more complex figures/plots, but I'm aware that this is in progress!
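The schema-version verification suggested above can be approximated client-side before anything is submitted. A minimal sketch, assuming a hypothetical `schema_version` field and version label; this is not an existing Neptune feature, just one way to implement the reviewer's workaround.

```python
EXPECTED_SCHEMA_VERSION = "v2"  # hypothetical project-wide version label


def validate_payload(payload: dict) -> dict:
    """Reject payloads whose schema_version does not match what the project expects."""
    version = payload.get("schema_version")
    if version != EXPECTED_SCHEMA_VERSION:
        raise ValueError(
            f"expected schema_version {EXPECTED_SCHEMA_VERSION!r}, got {version!r}"
        )
    return payload


# Passes: the payload declares the expected version.
ok = validate_payload({"schema_version": "v2", "rmse": 0.41})
print(ok["rmse"])  # → 0.41
```

Calling this before each upload turns a silent schema drift into an immediate, local error.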

It's just really easy to use and an amazing tool for keeping track of what you did, when you did it, and what the results were. For us it's an essential part of our research process across a multitude of AI products. The Neptune team is also super responsive and helpful, actually taking feature suggestions into consideration and quickly resolving any bugs or issues we've faced.
I don't have any real disadvantages to speak of.

Neptune AI covers all the needs of a Model Registry in an efficient way; thus, it can be a core part of the lifecycle of ML models within a platform.
It offers a user-friendly UI with which, in addition to easily viewing the registered metadata associated with each execution, we can build more or less complex dashboards. On the user side, the API works in an easy, intuitive, and flexible way, providing Data Scientists a simple way to interact with projects from both scripts and Jupyter Notebooks.
As for installation, it is as simple as installing a Python package if you use the SaaS option. If you opt for self-hosting, it is a bit more complex, but you will always have expert support.
Finally, the Neptune team always responds quickly, offering both email support and call availability, although most of the time doubts can be resolved by consulting the documentation, which is more than adequate.
As I mentioned in the benefits, the Neptune API is very flexible, perhaps excessively so when used in a corporate environment, since we could find metadata structured in different ways across projects.
If this is a problem, as a company you may have to develop custom components on top of the Neptune AI API to adapt it to your specific needs and standardize the metadata registry.
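A minimal sketch of such a custom component, assuming a duck-typed run object that supports item assignment; the namespace list, the class name, and the `namespace/key` convention are all hypothetical choices a company would make, not part of Neptune's API.

```python
# Hypothetical company-wide namespaces; adjust to your own metadata standard.
ALLOWED_NAMESPACES = {"params", "metrics", "data", "model"}


class StandardizedRun:
    """Thin wrapper that rejects keys outside the agreed namespaces."""

    def __init__(self, run):
        self._run = run  # any object supporting run[key] = value

    def __setitem__(self, key: str, value):
        namespace = key.split("/", 1)[0]
        if namespace not in ALLOWED_NAMESPACES:
            raise KeyError(
                f"namespace {namespace!r} is not in {sorted(ALLOWED_NAMESPACES)}"
            )
        self._run[key] = value


# Usage with a plain dict standing in for a real tracking run:
backend = {}
run = StandardizedRun(backend)
run["metrics/accuracy"] = 0.92
print(backend)  # → {'metrics/accuracy': 0.92}
```

Because the wrapper only relies on item assignment, the same code works against a dict in tests and a real run object in production.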