About Primary Data
With its DataSphere platform, Primary Data automates the flow of data across enterprise infrastructure and the cloud, ensuring the right data is in the right place at the right time to meet evolving application demands. The storage- and vendor-agnostic DataSphere architecture is based on a metadata engine that automatically moves data to the most appropriate resource to meet data requirements without application interruption. DataSphere helps enterprises overcome performance bottlenecks, integrate with the cloud for savings and active archival, and easily adopt new resources from any vendor. It enables customers to reduce overprovisioning by up to 50 percent, generating savings that easily run into the millions for enterprises operating at petabyte scale.
Research containing Primary Data
CB Insights Intelligence Analysts have mentioned Primary Data in 1 CB Insights research brief, most recently on Apr 24, 2023.
Primary Data Patents
Primary Data has filed 1 patent.
Computer storage devices, Computer data storage, SCSI, Data management, Computer buses
Latest Primary Data News
Apr 25, 2023
Amazon Web Services

This is a guest post by Valdiney Gomes, Hélio Leal, and Flávia Lima from Dafiti.

Data and its various uses are increasingly evident in companies, and each professional has their own preferences about which technologies to use to visualize data, which isn't necessarily in line with a company's technological needs and infrastructure. At Dafiti, a Brazilian fashion and style e-commerce retailer, it was no different. Five tools were used by different sectors of the company, which caused misalignment and management overhead, spreading our resources thin to support them.

Looking for a tool that would enable us to democratize our data, we chose Amazon QuickSight, a cloud-native, serverless business intelligence (BI) service that powers interactive dashboards and lets us make better data-driven decisions, as our corporate solution for data visualization. In this post, we discuss why we chose QuickSight and how we implemented it.

Why we chose QuickSight

We had specific requirements for our BI solution and looked at many different options. The following factors guided our decision:

Tool close to data – It was important to have the data visualization tool as close to the data as possible. At Dafiti, the entire infrastructure is on AWS, and we use Amazon Redshift as our data warehouse. When using SPICE (Super-fast, Parallel, In-memory Calculation Engine), QuickSight extracts data from Amazon Redshift as efficiently as possible using UNLOAD, which optimizes the use of Amazon Redshift.

Highly available and accessible solution – We wanted to be able to access the tool through a web or mobile interface, in addition to being able to do almost anything through API calls.

Serverless solution – All the other data visualization solutions used at Dafiti were on premises, which created unnecessary cost and effort to maintain those services, taking the focus away from what was most important to us: data.
Flexible pricing model – We needed a pricing model that would allow us to provide access to everyone in the company, priced by usage rather than by license. Thanks to AWS pay-as-you-go pricing, with more than double the number of users we had on our previous main data visualization solution, our cost with QuickSight is about 10 times lower.

Robust documentation – The material provided by AWS proved helpful, allowing our team to put the project into production.

Unifying our solution

We were previously using QlikView, Sisense, Tableau, SAP, and Excel to analyze our data across different teams. We were already using other AWS services and learning about QuickSight when we hosted a Data Battle with AWS, a hybrid event for more than 230 Dafiti employees. The event had a hands-on approach: a workshop followed by a friendly QuickSight competition in which participants had to find information in their own dashboards to answer correctly. This 5-hour event flew by, accelerated the learning path of technical and business teams, and proved that QuickSight was the right tool for us.

QuickSight has brought all of our teams into one tool while lowering costs by 80% and enabling us to do much more together. Currently, over 400 employees across nine business units, including our CEO, use QuickSight as their sole source of truth on a daily basis. This includes human resources, auditing, and customer service, which previously had their analyses spread across several sources.

Data democratization

Data democratization is one of Dafiti's main objectives. We believe that allowing everyone to analyze the data, while following Brazilian, Argentinean, and Colombian privacy laws, unlocks potential for improving decision-making processes by extracting value from the data the company generates. However, the democratization of data must come with the responsible use of resources.
Yes, we want all users to be able to access and extract value from the data, but the cost can never be greater than the value it generates.

How we organized the project

Data democratization drives Dafiti's strategy. When implementing QuickSight, the project was guided by our drive to become an even more data-driven company (we talked about this at AWS Summit SP 2022) and to make data increasingly accessible.

We organized QuickSight by folders, as shown in the following figure, with each folder representing a business area. This makes it easier to grant access and ensures that everyone in the same area has access to exactly the same set of data and reports. In this model, people from the corporate data area can view and edit any resource from any area, while customer service users can view and edit resources only for customer service.

Expanding the model a bit, reports created by one area can be shared with others, as shown in the following figure, in which the SAC report was shared with Support, creating what we call a reporting portfolio. In this way, all users who join any of the groups have exactly the same view as any of their peers, eliminating privileges in data access. In addition, the portfolio is enriched every day with reports created and maintained by other areas that may be of interest beyond the area responsible for creating them.

For this to work correctly, a certain rigidity is necessary regarding the few naming and documentation standards that have been defined. On the other hand, designers have complete freedom to define the characteristics of their reports. Another highlight of this model is that no report can be shared directly with a specific user; this restriction was defined using custom permissions in QuickSight. Reports are therefore always shared only through folders. After all, we want the data to be equally accessible to everyone in the company.
Technical configurations

QuickSight offers a comprehensive API, and all the activities we carry out on a daily basis take place through these APIs, chief among them granting access and monitoring various aspects of the tool. The QuickSight visual interface allows most of the tool's maintenance activities to be performed, and integration with Active Directory or the use of AWS Identity and Access Management (IAM) users is possible, but we concluded that neither would be the ideal way to grant access. Therefore, we defined an access grant flow for users and groups based on the QuickSight API, as shown in the following figure. In this model, the creation and removal of users is done through a JSON file with the following structure:

{
  "Version": "1.0.0",
  "Namespace": "default",
  "AwsAccountId": "<AwsAccountId>",
  "AwsRegion": "<AwsRegion>",
  "Permission": {
    "GroupList": [
      {"GroupName": "QUICKSIGHT_DATA_EDITOR"},
      {"GroupName": "QUICKSIGHT_DATA_VIEWER"},
      {"GroupName": "QUICKSIGHT_DATA_DESIGNER"},
      {"GroupName": "QUICKSIGHT_SAC_VIEWER"},
      {"GroupName": "QUICKSIGHT_SAC_DESIGNER"},
      ...
    ],
    "UserList": [
      {"UserName": "[email protected]", "Active": "True", "GroupList": [{"GroupName": "QUICKSIGHT_DATA_EDITOR"}]},
      {"UserName": "[email protected]", "Active": "True", "GroupList": [{"GroupName": "QUICKSIGHT_SAC_VIEWER"}]},
      ...
    ]
  }
}

Whenever a user needs to be added or changed, the file is edited and a pull request is submitted to GitHub. If the request is approved, an action is triggered that sends the file to an Amazon Simple Storage Service (Amazon S3) bucket. From there, an AWS Lambda function is triggered that performs two activities: maintaining users and groups, and sending an invitation through Amazon Simple Email Service (Amazon SES) for users to join QuickSight. In our case, we opted for a personalized invitation model that emphasizes the data democratization initiative being conducted.
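As a rough illustration of the Lambda step described above, the following Python sketch reads the permission file from S3 and applies it through the QuickSight API with boto3. This is not Dafiti's actual code: the function names, the READER role default, and the omission of idempotency handling (e.g. catching ResourceExistsException for pre-existing groups) are all assumptions.

```python
import json


def plan_memberships(doc):
    """Derive (user, group) pairs for every active user in the permission file."""
    pairs = []
    for user in doc["Permission"]["UserList"]:
        if user.get("Active") == "True":
            for group in user.get("GroupList", []):
                pairs.append((user["UserName"], group["GroupName"]))
    return pairs


def sync_quicksight_access(bucket, key):
    """Lambda-style handler: fetch the JSON file from S3, then create the
    groups, users, and memberships it declares via the QuickSight API.
    Error handling and removal of deactivated users are omitted for brevity."""
    import boto3  # imported here so plan_memberships stays testable offline

    s3 = boto3.client("s3")
    doc = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

    qs = boto3.client("quicksight", region_name=doc["AwsRegion"])
    account, namespace = doc["AwsAccountId"], doc["Namespace"]

    for group in doc["Permission"]["GroupList"]:
        qs.create_group(AwsAccountId=account, Namespace=namespace,
                        GroupName=group["GroupName"])

    for user_name, group_name in plan_memberships(doc):
        qs.register_user(AwsAccountId=account, Namespace=namespace,
                         IdentityType="QUICKSIGHT", Email=user_name,
                         UserName=user_name, UserRole="READER")
        qs.create_group_membership(AwsAccountId=account, Namespace=namespace,
                                   GroupName=group_name, MemberName=user_name)
```

The second activity from the post, sending the personalized SES invitation to each newly registered user, would follow the registration loop; it is left out here to keep the sketch focused on the access grant itself.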
To monitor the tool, we implemented the architecture shown in the following figure, in which we use AWS CloudTrail to pull the QuickSight logs and the QuickSight API to extract information about the tool's resources, such as reports, users, datasets, data sources, and more. All of this data is processed by Glove, our data integration tool, stored in Amazon Redshift, and analyzed in QuickSight itself. This allows us to understand the behavior of our users and concentrate efforts on the most-used resources, in addition to allowing fine-grained control of cost and SPICE usage.

To update the datasets, we don't use the QuickSight internal scheduler, due to the large volume of data and the complexity of the DAGs. We prefer updating the datasets within our ETL (extract, transform, and load) and ELT process orchestration flow, for which we use Hanger, our orchestration tool. This approach allows a dataset to be updated only after its data source has changed and the data quality processes have been executed. This model is represented by the following figure.

Conclusion

Choosing a data visualization tool is not a simple task. It involves many considerations, and several aspects must be analyzed for the choice to fit the characteristics of the company and the profile of its business users. For Dafiti, QuickSight was a natural choice from the moment we learned about its features. We needed a service that was in the same cloud as our main data sources, extremely fast thanks to SPICE, and free of the maintenance and cost problems of on-premises applications. In terms of the functionality our business needs, it met our requirements perfectly.

Do you want to know more about what we are doing in the data area at Dafiti? Check out the following videos:

About the Authors

Valdiney Gomes is Data Engineering Coordinator at Dafiti.
He worked for many years in software engineering, migrated to data engineering, and currently leads an amazing team responsible for the data platform for Dafiti in Latin America.

Hélio Leal is a Data Engineering Specialist at Dafiti, responsible for maintaining and evolving the entire data platform at Dafiti using AWS solutions.

Flávia Lima is a Data Engineer at Dafiti, responsible for sustaining the data platform and providing data from many sources to internal customers.
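To make the dataset refresh approach described in the post concrete: triggering a SPICE refresh from an external orchestrator such as Hanger comes down to the QuickSight CreateIngestion API. The following is a minimal sketch, assuming boto3; the function names and polling interval are illustrative, not Dafiti's implementation.

```python
import time
import uuid


def make_ingestion_id():
    """Each ingestion request needs a unique ID; a UUID string satisfies that."""
    return str(uuid.uuid4())


def refresh_spice_dataset(account_id, dataset_id, poll_seconds=15):
    """Start a SPICE ingestion for a dataset and block until it reaches a
    terminal status. Intended to run as a step in an orchestration DAG,
    after the source tables have been loaded and quality-checked."""
    import boto3  # imported here so make_ingestion_id stays testable offline

    qs = boto3.client("quicksight")
    ingestion_id = make_ingestion_id()
    qs.create_ingestion(AwsAccountId=account_id, DataSetId=dataset_id,
                        IngestionId=ingestion_id)
    while True:
        status = qs.describe_ingestion(
            AwsAccountId=account_id, DataSetId=dataset_id,
            IngestionId=ingestion_id)["Ingestion"]["IngestionStatus"]
        if status in ("COMPLETED", "FAILED", "CANCELLED"):
            return status
        time.sleep(poll_seconds)
```

Running a step like this only after upstream loads and quality checks succeed is what lets a dataset refresh exactly when its source changes, rather than on a fixed internal schedule.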
Primary Data Frequently Asked Questions (FAQ)
When was Primary Data founded?
Primary Data was founded in 2013.
Where is Primary Data's headquarters?
Primary Data's headquarters is located at 4300 El Camino Real, Los Altos.
What is Primary Data's latest funding round?
Primary Data's latest funding round is Dead.
How much did Primary Data raise?
Primary Data raised a total of $103M.
Who are the investors of Primary Data?
Investors of Primary Data include Accel, Battery Ventures, Pelion Venture Partners, Lightspeed Venture Partners, Mercato Partners and 5 more.
Who are Primary Data's competitors?
Competitors of Primary Data include SimpliVity.
Compare Primary Data to Competitors
Scale Computing is a hyper-converged infrastructure company that focuses on small and mid-size businesses with limited IT resources. It develops an IT virtualization infrastructure platform that integrates storage, servers, and virtualization software into an all-in-one appliance-based system. It was founded in 2007 and is based in Indianapolis, Indiana.
Tintri (NASDAQ: TNTR) provides artificial intelligence (AI) powered data management solutions. It helps information technology (IT) organizations to focus on virtualized applications and business services. The company offers databases, data protection, data recovery, and more. It was founded in 2008 and is based in Mountain View, California. In September 2018, Tintri was acquired by DDN at a valuation of $60 million.
VMware (NYSE: VMW) is a cloud computing and virtualization technology company that enables organizations to build, run, manage and secure their apps across clouds. It serves the banking, healthcare, government, retail, telecommunications, manufacturing, and transportation industries. The company was founded in 1998 and is based in Palo Alto, California.
NetApp (NASDAQ: NTAP) creates storage and data management solutions that help accelerate business breakthroughs and deliver cost efficiency. Customers around the world choose the company for its "go beyond" approach and broad portfolio of solutions for cloud computing, flash storage, business applications, data storage for virtual servers, disk-to-disk backup, and more. NetApp solutions provide nonstop availability of critical business data and simplify business processes so customers can deploy new capabilities with confidence and get to revenue faster.
Actifio develops multi-cloud data management software. It replaces siloed data management applications with a simple, application-centric, service-level agreement (SLA)-driven approach. It was founded in 2009 and is based in Waltham, Massachusetts. In December 2020, Actifio was acquired by Google.
Pure Storage, the all-flash enterprise storage company, enables the broad deployment of flash in the data center. When compared to traditional disk-centric arrays, Pure Storage all-flash enterprise arrays are 10x faster and 10x more space and power efficient at a price point that is less than performance disk per gigabyte stored. The Pure Storage FlashArray is ideal for high performance workloads, including server virtualization, desktop virtualization (VDI), database (OLTP, real-time analytics) and cloud computing. The company was founded in 2009 and is based in Mountain View, California.