
Performant on-device inferencing with ONNX Runtime

As machine learning usage continues to permeate across industries, we see broadening diversity in deployment targets, with companies choosing to run locally on-client versus cloud-based services for security, performance, and cost reasons. On-device machine learning model serving is a difficult task, especially given the limited bandwidth of early-stage startups. This guest post from the team at Pieces shares the problems and solutions they evaluated for their on-device model serving stack and how ONNX Runtime serves as the backbone of their success.

Local-first machine learning

Pieces is a code snippet management tool that allows developers to save, search, and reuse their snippets without interrupting their workflow. The magic of Pieces is that it automatically enriches these snippets so that they're more useful to the developer after being stored in Pieces. A large part of this enrichment is driven by our machine learning models, which provide programming language detection, concept tagging, semantic description, snippet clustering, optical character recognition, and much more. To enable full coverage of the developer workflow, we must run these models from the desktop, terminal, integrated development environment, browser, and team communication channels.

Like many businesses, our first instinct was to serve these models as cloud endpoints; however, we realized this wouldn't suit our needs for a few reasons. First, in order to maintain a seamless developer workflow, our models must have low latency; the round trip to the server is lost time we can't afford. Second, our users frequently work with proprietary code, so privacy is a primary concern, and sending this data over the wire would expose it to potential attacks. Finally, hosting models on performant cloud machines can be very expensive and is, in our opinion, an unnecessary cost. We firmly believe that advances in modern personal hardware can be taken advantage of to rival or even improve upon the performance of models on virtual machines. Therefore, we needed an on-device model serving platform that would provide us with these benefits while still giving our machine learning engineers the flexibility that cloud serving offers. After some trial and error, ONNX Runtime emerged as the clear winner.

Our ideal machine learning runtime

When we set out to find the backbone of our machine learning serving system, we were looking for the following qualities:

Easy implementation: It should fit seamlessly into our stack and require minimal custom code to implement and maintain. Our application is built in Flutter, so the runtime would ideally work natively in the Dart language so that our non-machine-learning engineers could confidently interact with the API.

Balanced: As I mentioned above, performance is key to our success, so we need a runtime that can spin up and perform inference lightning fast. On the other hand, we also need useful tools to optimize model performance, ease model format conversion, and generally facilitate the machine learning engineering process.

Model coverage: It should support the vast majority of machine learning model operators and architectures, especially cutting-edge models such as those in the transformer family.

TensorFlow Lite

Our initial research revealed three potential options: TensorFlow Lite, TorchServe, and ONNX Runtime. TensorFlow Lite was our top pick because of how easy it would be to implement. We found an open source Dart package which provided Dart bindings to the TensorFlow Lite C API out of the box.
This allowed us to simply import the package and immediately have access to machine learning models in our application without worrying about the lower-level details in C and C++. The tiny runtime offered great performance and worked very well for the initial models we tested in production. However, we quickly ran into a huge blocker: converting other model formats to TensorFlow Lite is a pain. Our first realization of this limitation came when we tried and failed to convert a simple PyTorch LSTM to TensorFlow Lite. This spurred further research into how else we might be limited. We found that many of the models we planned to work on in the future would have to be trained in TensorFlow or Keras because of conversion issues. This was problematic because we've found that there's no one-size-fits-all machine learning framework. Some are better suited for certain tasks, and our machine learning engineers differ in preference and skill level for each of these frameworks; unfortunately for TensorFlow Lite, we tend to favor PyTorch over TensorFlow.

This issue was then compounded by the fact that TensorFlow Lite only supports a subset of the machine learning operators available in TensorFlow and Keras; importantly, it lags in the more cutting-edge operators that are required by new, high-performance architectures. This was the final straw for us with TensorFlow Lite. We were looking to implement a fairly standard transformer-based model that we'd trained in TensorFlow and found that the conversion was impossible. To take advantage of the leaps and bounds made in large language models, we needed a more flexible runtime.

TorchServe

Having learned our lesson about locking ourselves into a specific training framework, we opted to skip testing TorchServe so that we would not run into the same conversion issues.

ONNX Runtime saves the day

Like TensorFlow Lite, ONNX Runtime gave us a lightweight runtime that focused on performance, but where it really stood out was model coverage. Because it is built around the ONNX format, which was created to solve interoperability between machine learning tools, it allows our machine learning engineers to choose the framework that works best for them and the task at hand, with confidence that they will be able to convert their model to ONNX in the end. This flexibility brought more fluidity to our research and development process and reduced the time spent preparing new models for release.

Another large benefit of ONNX Runtime for us is a standardized model optimization pipeline, truly the "balanced" tool we were looking for. By serving models in a single format, we're able to iterate through a fixed set of known optimizations until we find the desired speed, size, and accuracy tradeoff for each model. Specifically, for each of our ONNX models, the last step before production is to apply different levels of ONNX Runtime graph optimizations and linear quantization. The ease of this process is a quick win for us every time.

Speaking of feature richness, a final reason we chose ONNX Runtime was that its baseline performance was good, and there were many options we could implement down the road to improve performance further. Due to the way we currently build our app, we have been limited to the vanilla CPU builds of ONNX Runtime. However, an upcoming modification to our infrastructure will allow us to utilize execution providers to serve optimized builds of ONNX Runtime based on a user's CPU and GPU architecture.
We also plan to implement dynamic thread management as well as IOBinding for GPU-enabled devices.

Production workflow

Now that we've covered our reasoning for choosing ONNX Runtime, we'll give a brief technical walkthrough of how we use ONNX Runtime to facilitate model deployment.

Model conversion

After we've finished training a new model, our first step towards deployment is getting that model into an ONNX format. The specific conversion approach depends on the framework used to train the model. We have successfully used the conversion tools supplied by HuggingFace, PyTorch, and TensorFlow.

Some model formats are not supported by these conversion tools, but luckily ONNX Runtime has its own internal conversion utilities. We recently used these tools to implement a T5 transformer model for code description generation. The HuggingFace model uses a BeamSearch node for text generation that we were only able to convert to ONNX using ONNX Runtime's convert_generation.py tool, which is included in its transformer utilities.

ONNX model optimization

Our first optimization step is running the ONNX model through all ONNX Runtime optimizations, using GraphOptimizationLevel.ORT_ENABLE_ALL, to reduce model size and startup time. We perform all of these optimizations offline so that our ONNX Runtime binary doesn't have to perform them on startup. We are able to consistently reduce model size and latency very easily with this utility.

Our second optimization step is quantization. Again, ONNX Runtime provides an excellent utility for this. We've used both quantize_dynamic() and quantize_static() in production, depending on our desired balance of speed and accuracy for a specific model.

Inference

Once we have an optimized ONNX model, it's ready to be put into production. We've created a thin wrapper around the ONNX Runtime C++ API which allows us to spin up an instance of an inference session given an arbitrary ONNX model. We based this wrapper on the onnxruntime-inference-examples repository. After developing this simple wrapper binary, we were able to quickly get native Dart support by using Dart FFI (Foreign Function Interface) to create Dart bindings for our C++ API. This reduces friction between teams at Pieces by allowing our Dart software engineers to easily inject our machine learning efforts into all of our services.

Conclusion

On-device machine learning requires a tool that is performant yet allows you to take full advantage of the current state-of-the-art machine learning models. ONNX Runtime gracefully meets both needs, not to mention the incredibly helpful ONNX Runtime engineers on GitHub who are always willing to assist and are constantly pushing ONNX Runtime forward to keep up with the latest trends in machine learning. It's for these reasons that we at Pieces confidently rest our entire machine learning architecture on its shoulders.

Learn more about ONNX Runtime

ONNX Runtime Tutorials.
Video tutorials for ONNX Runtime.

The post Performant on-device inferencing with ONNX Runtime appeared first on Microsoft Open Source Blog.
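As a concrete footnote to the optimization workflow described above, here is a minimal Python sketch of offline graph optimization followed by dynamic quantization with ONNX Runtime. The file names are placeholders, and this mirrors the steps the post describes rather than Pieces' actual scripts:

```python
import onnxruntime as ort
from onnxruntime.quantization import QuantType, quantize_dynamic

# Step 1: apply ONNX Runtime graph optimizations offline and persist the result,
# so the optimization cost is not paid again at session startup.
sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.optimized_model_filepath = "model.opt.onnx"
ort.InferenceSession("model.onnx", sess_options)  # creating the session writes model.opt.onnx

# Step 2: linear quantization to INT8 (dynamic here; quantize_static needs calibration data).
quantize_dynamic("model.opt.onnx", "model.int8.onnx", weight_type=QuantType.QInt8)

# Step 3: load the optimized, quantized model for inference.
session = ort.InferenceSession("model.int8.onnx", providers=["CPUExecutionProvider"])
print([inp.name for inp in session.get_inputs()])
```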


15 Recruiters Reveal If Data Science Certificates Are Worth It

"Are data science certificates worth it?" To answer this question properly, we interviewed 15 hiring managers in the data science field. This article will explain what certifications really mean to hiring managers and compare the best data science certifications available right now. Bonus content: we'll also reveal the best-kept secrets among recruiters, including what they pay most attention to when weeding out resumes.

How Data Science Certificates Impact Your Job Search

We talked to more than a dozen hiring managers and recruiters in the data science field about what they wanted to see on applicants' résumés. None of them mentioned certificates. Not one. Here's what we learned: certificates certainly won't hurt your job search as long as they're presented correctly. But they're unlikely to help much either, at least on their own.

Why Data Science Certificates Fall Short

You might be wondering why these certificates aren't worth the paper they're printed on. The issue is that there's no universal standard and no universally accredited certification authority. Different websites, schools, and online learning platforms all issue their own certificates. That means these documents could mean anything, or they could mean nothing at all! This is why employers tend not to give them more than a passing glance when qualifying candidates.

What's the Point of a Certification Then?

If certifications won't help you get a job in data science, then what's the point of earning one? When it comes down to it, data scientist certifications aren't completely useless. At Dataquest, we issue certificates when users complete any of our interactive data science courses. Why? Because it's a great way for students to demonstrate that they're actively engaged in learning new skills. Recruiters do like to see that applicants are constantly trying to improve themselves. Listing data science certificates can help your job application in that way.

What's Better than a Data Science Certificate?

What's most important to recruiters is whether you can actually do the job. And certificates aren't proof of real skills. The best way to demonstrate your skills is by completing projects and adding them to a portfolio. Portfolios are like the holy grail of data science skills. That's why hiring managers look at them first. Depending on what they see in your portfolio, they'll either discard your application or send it to the next round of the hiring process. Most of Dataquest's courses contain guided projects you'll complete to help you build your portfolio. Here are just a few of them:

Prison Break — Have some fun using Python and Jupyter Notebook to analyze a dataset of helicopter prison escapes.
Exploring Hacker News Posts — Work with a dataset of submissions to Hacker News, a popular technology site.
Exploring eBay Car Sales Data — Use Python to work with a scraped dataset of used cars from eBay Kleinanzeigen, a classifieds section of the German eBay website.

You can sign up for free! Check out our courses here. When considering which certification to get, don't focus on "which data science certificate is best." Instead, find the platform that best helps you learn the fundamental data science skills. That's what's going to help you land a job in the field.

How to Choose a Data Science Certificate Program in 5 Steps

Finding a data science program that offers a certificate is easy. A quick Google search will turn up dozens. The hard part is deciding whether the certificate is worth your time and money.
Let’s simplify this process. Here are five key things to consider when looking at a data science certification Content Cost Prerequisites or qualifications Time commitment Student reviews Remember, data science certificates are not worth the paper they’re printed on unless they teach you the skills employers are looking for. So that first bullet point is the most important. Think content, content, content!  Now, let’s look at some real-life examples to compare. Top Data Science Certifications 1. Dataquest What you’ll learn Dataquest offers five different career paths that cover the required skills to become a data analyst, business analyst, data scientist, and/or data engineer. The specific skills covered vary depending on which path you choose.  Topics include Python and R programming SQL and PostgreSQL Probability and statistics Machine learning Workflow skills like Git, the command line (bash/shell) And more Cost An annual Premium subscription of $399. Monthly subscriptions are also available. Prerequisites None. There is no application process (anyone can sign up and start learning today). No prior knowledge of applied statistics or programming is required. Time commitment Varies. Dataquest is a self-serve, interactive learning platform. Most learners find they’re able to meet their learning goals in six months, if studying fewer than ten hours per week. Learning goals can be accelerated with larger time commitments.  Reviews 4.85/5 average on Switchup (301 reviews) 4.76/5 on Course Report (19 reviews) 4.7/5 on G2 (46 reviews) 2. Cloudera University Data Analyst Course/Exam What you’ll learn This course focuses on data analysis using Apache products Hadoop, Hive, and Impala. It covers some SQL, but does not address Python or R programming. Cost The on-demand version costs $2,235 (180 days of access). Certification exams have an additional cost. Prerequisites Some prior knowledge of SQL and Linux command line is required. Time commitment Varies. Because this is a self-paced course, users have access for 180 days to complete 15 sections. Each section is estimated to take between 5-9 hours. The time commitment is between 75 and 135 hours. If you commit less than an hour each day, it might take you the entire 180 days. If you can devote 9 or more hours per day, it might take you a couple of weeks to complete. Reviews Third-party reviews for this program are difficult to find. 3. IBM Data Science Professional Certificate What you’ll learn This Coursera-based program covers Python and SQL. This includes some machine learning skills with Python. Cost A Coursera subscription, which is required. Based on Coursera’s 10-month completion estimate, the approximate total program cost is $390. A similar program is also available on EdX. Prerequisites None.  Time commitment Varies. Coursera suggests that the average time to complete this certificate is ten months. Reviews Quantitative third-party reviews are difficult to find. 4.6/5 average on Coursera’s own site (57,501 ratings) 4. Harvard/EdX Professional Certificate in Data Science What you’ll learn This EdX-based program covers R, some machine learning skills, and some statistics and workflow skills. It does not appear to include SQL. Cost $792.80 Prerequisites None.  Time commitment One year and five months. Course progress doesn’t carry over from session to session, so it could require more time if you’re not able to complete a course within its course run. Reviews Quantitative third-party reviews are difficult to find. 
4.6/5 average on Class Central (11 reviews) 5. Certified Analytics Professional What you’ll learn Potentially nothing–this is simply a certification exam. However, test prep courses are available. Cost The certification test costs $695 and includes limited prep materials. Dedicated prep courses are available for an additional cost. Prerequisites An application is required to take the certification exam. Since no course is included, you’ll need to learn the required information on your own or sign up for a course separately. Time commitment The exam itself is relatively short. The dedicated prep courses take 1-2 months, depending on options. They are not required for taking the exam. Reviews Quantitative third-party reviews are difficult to find. Here are some independent opinions about the certification Reddit thread about CAP Quora thread about CAP 6. From Data to Insights with Google Cloud What you’ll learn This course covers SQL data analysis skills with a focus on using BigQuery and Google Cloud’s data analysis tool. Cost A Coursera subscription, which is required, costs $39/month. Coursera estimates that most students will need two months to complete the program. Prerequisites The course page says “We recommend participants have some proficiency with ANSI SQL.” It’s not clear what level of SQL proficiency is required. Time commitment Coursera estimates that most students will need two months to complete the program, but students can work at their own pace. However, courses do begin on prescribed dates. Reviews Quantitative third-party reviews are difficult to find, but 4.7/5 rating on Coursera itself (3,712 ratings) Insider Tip Beware of Prerequisites and Qualifications! Before you start looking for data science courses and certifications, there’s something you need to be aware of.  While some programs like Dataquest, Coursera, and Udemy do not require any particular background or industry knowledge, many others do have concrete prerequisites. For example, DASCA’s Senior Data Scientist Certification tracks require at least a Bachelor’s degree (some tracks require a Master’s degree). That’s in addition to a minimum of 3-5 years of professional data-related experience! Some programs, particularly offline bootcamps, also require specific qualifications or have extensive application processes. Translation? You won’t be able to jump in right away and begin learning. You’ll need to factor in time costs and application fees for these programs when making your choice. Best-Kept Secret The Myth of University Certificates in Data Science If you’re considering a data science certificate from a university, think again.  Many of the expensive certification programs offered online by brand-name schools (and even Ivy-League schools) are not very meaningful to potential employers.  A number of these programs are not even administered by the schools themselves. Instead, they’re run by for-profit, third-party firms called “Online Program Managers”.  What’s worse is that data science recruiters know this. Yes, employers are keenly aware that a Harvard-affiliated certificate from EdX and a Harvard University degree are two very different things. Plus, most data science hiring managers will not have time to research every data science certification they see on a résumé. Most résumés are only given about 30 seconds of review time. So even if your university-based certificate is actually worth something, recruiters likely won’t notice it.  
The Sticker Shock of University Certificates University certificates tend to be expensive. Consider the cost of some of the most popular options out there Cornell’s three-week data analytics certificate – $3,600 Duke’s big data and data science certificate – $3,195 Georgetown’s professional certificate in data science – $7,496 UC Berkeley’s data scientist certification program – $5,100 Harvard’s data science certificate – $11,600 How to Get the Data Science Skills Employers Desire We’ve established that recruiters and hiring managers in data science are looking for real-world skills, not necessarily certifications. So what’s the best way to get the skills you need? Hands-down, the best way to acquire compelling data science skills is by digging in and getting your hands dirty with actual data.  Choose a data science course that lets you complete projects as you learn. Then, showcase your know-how with digital portfolios. That way, employers can see what skills you’ve mastered when considering your application.  At Dataquest, our courses are interactive and project-based. They’re designed so that students can immediately apply their learning and document their new skills to get the attention of recruiters. Sign up for free today, and launch your career in the growing field of data science!


Making culture count for Open Source sustainability—Celebrating FOSS Fund 25

Microsoft cares about open source sustainability, from its membership across multiple initiatives and foundations, to ongoing empowerment efforts that encourage and reward contributions, and beyond. Building a culture where every employee can visualize and embrace their responsibility to upstream projects is at the forefront of the Open Source Programs Office (OSPO) work, which embodies the goals of Microsoft's FOSS Fund.

Building on the work of others

In the spirit of open source, this work builds on the work of our peers, specifically the FOSS Fund model created by Indeed, and with ongoing collaboration with TODO Group members working on similar goals for supporting open source in their companies. At Microsoft, FOSS Fund is an employee-driven effort that builds awareness of open source sustainability through giving. The fund awards $10,000 USD each month to open source projects nominated by employees. Since the program's launch nearly two years ago, 34 projects have been selected, as determined by thousands of employee votes. While we don't track how funds are used, some projects have shared that they used the funds for everything from sponsoring a contributor to creating brand assets, attending events, and covering technology and subscription expenses.

Creating visibility for Open Source projects, maintainers, and their impact

To date, Microsoft's FOSS Fund has been awarded to small projects with big impact, like Syn and ajv, as well as larger, foundational projects with established communities like curl, Network Time Protocol (NTP), and webpack. We were proud to see employees nominating and voting for projects with impact for accessibility and inclusion like Chayn, Optikey, and NVDA. Beyond size and impact, nominations spanned a range of ecosystems, including gaming with Godot Engine and mapping with the much-loved OpenStreetMap project. Employee nominations helped surface and rally support for a vast range of open technology making software better, more secure, faster, easier to document, easier to test, and easier to query, with projects like dbatools, OpenSSL, Babel, rust-analyzer, Reproducible Builds, QEMU, Grain, and mermaid-js.

Celebrating and looking forward

To celebrate FOSS Fund 25, we invited all employees whose projects were not selected in previous FOSS Funds to propose a project for a one-time $500.00 award. This resulted in over 40 more projects and project maintainers receiving this microgrant over the last few days (with 2 still to be issued). Additionally, for 2023, we will strive to grow our impact on, and be more intentional about, funding inclusion. To that end, we will add a new D&I track to the FOSS Fund, with awards directed towards projects having an impact on diversity and inclusion, or to efforts within upstream projects (like working groups) working on D&I efforts. The new track will run in alternate months. We hope this will continue to build a culture of awareness and responsibility for open source sustainability. If you or your organization are interested in building your own FOSS Fund, you can check out Indeed's free resource. If you are interested in collaborating on, or have ideas for impacting diversity and inclusion through such a program, please reach out to me, or join the TODO Group Slack channel and say hello!

The post Making culture count for Open Source sustainability—Celebrating FOSS Fund 25 appeared first on Microsoft Open Source Blog.


A Data CEO’s Guide to Becoming a Data Scientist From Scratch

If you want to know how to become a data scientist, then you’re in the right place. I’ve been where you are, and now I want to help. A decade ago, I was just a college graduate with a history degree. I then became a machine learning engineer, data science consultant, and now CEO of Dataquest. If I could do everything over, I would follow the steps I’m going to share with you in this article. It would have fast-tracked my career, saved me thousands of hours, and prevented a few gray hairs. The Wrong and Right Way  When I was learning, I tried to follow various online data science guides, but I ended up bored and without any actual data science skills to show for my time.  The guides were like a teacher at school handing me a bunch of books and telling me to read them all — a learning approach that never appealed to me. It was frustrating and self-defeating. Over time, I realized that I learn most effectively when I'm working on a problem I'm interested in.  And then it clicked. Instead of learning a checklist of data science skills, I decided to focus on building projects around real data. Not only did this learning method motivate me, it also mirrored the work I’d do in an actual data scientist role. I created this guide to help aspiring data scientists who are in the same position I was in. In fact, that’s also why I created Dataquest. Our data science courses are designed to take you from beginner to job-ready in less than 8 months using actual code and real-world projects. However, a series of courses isn’t enough. You need to know how to think, study, plan, and execute effectively if you want to become a data scientist. This actionable guide contains everything you need to know. How to Become a Data Scientist Step 1 Question Everything Step 2 Learn The Basics Step 3 Build Projects Step 4 Share Your Work Step 5 Learn From Others Step 6 Push Your Boundaries Now, let’s go over each of these one by one. Step 1 Question Everything The data science and data analytics field is appealing because you get to answer interesting questions using actual data and code. These questions can range from Can I predict whether a flight will be on time? to How much does the U.S. spend per student on education?  To answer these questions, you need to develop an analytical mindset. The best way to develop this mindset is to start with analyzing news articles. First, find a news article that discusses data. Here are two great examples Can Running Make You Smarter? or Is Sugar Really Bad for You?.  Then, think about the following How they reach their conclusions given the data they discuss How you might design a study to investigate further What questions you might want to ask if you had access to the underlying data Some articles, like this one on gun deaths in the U.S. and this one on online communities supporting Donald Trump actually have the underlying data available for download. This allows you to explore even deeper. You could do the following Download the data, and open it in Excel or an equivalent tool See what patterns you can find in the data by eyeballing it Do you think the data supports the conclusions of the article? Why or why not? What additional questions do you think you can use the data to answer? Here are some good places to find data-driven articles FiveThirtyEight New York Times Vox The Intercept Reflect After a few weeks of reading articles, reflect on whether you enjoyed coming up with questions and answering them. 
Becoming a data scientist is a long road, and you need to be very passionate about the field to make it all the way.  Data scientists constantly come up with questions and answer them using mathematical models and data analysis tools, so this step is great for understanding whether you'll actually like the work. If You Lack Interest, Analyze Things You Enjoy Perhaps you don't enjoy the process of coming up with questions in the abstract, but maybe you enjoy analyzing health or finance data. Find what you're passionate about, and then start viewing that passion with an analytical mindset. Personally, I was very interested in stock market data, which motivated me to build a model to predict the market. If you want to put in the months of hard work necessary to learn data science, working on something you’re passionate about will help you stay motivated when you face setbacks. Step 2 Learn The Basics Once you've figured out how to ask the right questions, you're ready to start learning the technical skills necessary to answer them. I recommend learning data science by studying the basics of programming in Python. Python is a programming language that has consistent syntax and is often recommended for beginners. It’s also versatile enough for extremely complex data science and machine learning-related work, such as deep learning or artificial intelligence using big data. Many people worry about which programming language to choose, but here are the key points to remember Data science is about answering questions and driving business value, not about tools Learning the concepts is more important than learning the syntax Building projects and sharing them is what you'll do in an actual data science role, and learning this way will give you a head start Super important note The goal isn’t to learn everything; it’s to learn just enough to start building projects.  Where You Should Learn Here are a few great places to learn Dataquest — I started Dataquest to make learning Python for data science or data analysis easier, faster, and more fun. We offer basic Python fundamentals courses, all the way to an all-in-one path consisting of all courses you need to become a data scientist.  Learn Python the Hard Way — a book that teaches Python concepts from the basics to more in-depth programs. The Python Tutorial — a free tutorial provided by the main Python site. The key is to learn the basics and start answering some of the questions you came up with over the past few weeks browsing articles. Step 3 Build Projects As you're learning the basics of coding, you should start building projects that answer interesting questions that will showcase your data science skills.  The projects you build don't have to be complex. For example, you could analyze Super Bowl winners to find patterns.  The key is to find interesting datasets, ask questions about the data, then answer those questions with code. If you need help finding datasets, check out this post for a good list of places to find them. As you're building projects, remember that Most data science work is data cleaning. The most common machine learning technique is linear regression. Everyone starts somewhere. Even if you feel like what you're doing isn't impressive, it's still worth working on. Where to Find Project Ideas Not only does building projects help you practice your skills and understand real data science work, it also helps you build a portfolio to show potential employers.  
Here are some more detailed guides on building projects on your own Storytelling with data Machine learning project Additionally, most of Dataquest’s courses contain interactive projects that you can complete while you’re learning. Here are just a few examples Prison Break — Have some fun, and analyze a dataset of helicopter prison escapes using Python and Jupyter Notebook. Exploring Hacker News Posts — Work with a dataset of submissions to Hacker News, a popular technology site. Exploring eBay Car Sales Data — Use Python to work with a scraped dataset of used cars from eBay Kleinanzeigen, a classifieds section of the German eBay website. Star Wars Survey — Work with Jupyter Notebook to analyze data on the Star Wars movies. Analyzing NYC High School Data — Discover the SAT performance of different demographics using scatter plots and maps. Predicting the Weather Using Machine Learning — Learn how to prepare data for machine learning, work with time series data, measure error, and improve your model performance. Add Project Complexity After building a few small projects, it's time to kick it up a notch! We need to add layers of project complexity to learn more advanced topics. At this step, however, it's crucial to execute this in an area you're interested in. My interest was the stock market, so all my advanced projects had to do with predictive modeling. As your skills grow, you can make the problem more complex by adding nuances like minute-by-minute prices and more accurate predictions. Check out this article on Python projects for more inspiration. Step 4 Share Your Work Once you've built a few data science projects, share them with others on GitHub! Here’s why It makes you think about how to best present your projects, which is what you'd do in a data science role. They allow your peers to view your projects and provide feedback. They allow employers to view your projects. Helpful resources about project portfolios How To Present Your Data Science Portfolio on GitHub Data Science Portfolios That Will Get You the Job Start a Simple Blog Along with uploading your work to GitHub, you should also think about publishing a blog. When I was learning data science, writing blog posts helped me do the following Capture interest from recruiters Learn concepts more thoroughly (the process of teaching really helps you learn) Connect with peers Here are some good topics for blog posts Explaining data science and programming concepts Discussing your projects and walking through your findings Discussing how you’re learning data science Here’s an example of a visualization I made on my blog many years ago that shows how much each Simpsons character likes the others Step 5 Learn From Others After you've started to build an online presence, it's a good idea to start engaging with other data scientists. You can do this in-person or in online communities. Here are some good online communities /r/datascience Data Science Slack Quora Kaggle Here at Dataquest, we have an online community that learners can use to receive feedback on projects, discuss tough data-related problems, and build relationships with data professionals. Personally, I was very active on Quora and Kaggle when I was learning, which helped me immensely. Engaging in online communities is a good way to do the following Find other people to learn with Enhance your profile and find opportunities Strengthen your knowledge by learning from others You can also engage with people in-person through Meetups. 
In-person engagement can help you meet and learn from more experienced data scientists in your area.

Step 6: Push Your Boundaries

What kind of data scientists do companies want to hire? The ones who find critical insights that save them money or make their customers happier. You have to apply the same process to learning — keep searching for new questions to answer, and keep answering harder and more complex questions. If you look back on your projects from a month or two ago and you don't see room for improvement, you probably aren't pushing your boundaries enough. You should be making strong progress every month, and your work should reflect that.

Here are some ways to push your boundaries and learn data science faster:

Try working with a larger dataset
Start a data science project that requires knowledge you don't have
Try making your project run faster
Teach what you did in a project to someone else

You've Got This!

Studying to become a data scientist or data engineer isn't easy, but the key is to stay motivated and enjoy what you're doing. If you're consistently building projects and sharing them, you'll build your expertise and get the data scientist job that you want. I haven't given you an exact roadmap to learning data science, but if you follow this process, you'll get farther than you imagined you could. Anyone can become a data scientist if you're motivated enough.

After years of being frustrated with how conventional sites taught data science, I created Dataquest, a better way to learn data science online. Dataquest solves the problems of MOOCs, where you never know what course to take next and you're never motivated by what you're learning. Dataquest leverages the lessons I've learned from helping thousands of people learn data science, and it focuses on making the learning experience engaging. At Dataquest, you'll build dozens of projects, and you'll learn all the skills you need to be a successful data scientist. Dataquest students have been hired at companies like Accenture and SpaceX. Good luck becoming a data scientist!

Becoming a Data Scientist — FAQs

What are the data scientist qualifications? Data scientists need to have a strong command of the relevant technical skills, which will include programming in Python or R, writing queries in SQL, building and optimizing machine learning models, and often some "workflow" skills like Git and the command line. Data scientists also need strong problem-solving, data visualization, and communication skills. Whereas a data analyst will often be given a question to answer, a data scientist is expected to explore the data and find relevant questions and business opportunities that others may have missed. While it is possible to find work as a data scientist with no prior experience, it's not a common path. Normally, people will work as a data analyst or data engineer before transitioning into a data scientist role.

What are the education requirements for a data scientist? Most data scientist roles will require at least a Bachelor's degree. Degrees in technical fields like computer science and statistics may be preferred, as well as advanced degrees like Ph.D.s and Master's degrees. However, advanced degrees are generally not strictly required (even when it says they are in the job posting). What employers are concerned about most is your skill set.
Applicants with less advanced or less technically relevant degrees can offset this disadvantage with a great project portfolio that demonstrates their advanced skills and experience doing relevant data science work. What skills are needed to become a data scientist? Specific requirements can vary quite a bit from job to job, and as the industry matures, more specialized roles will emerge. In general, though, the following skills are necessary for virtually any data science role Programming in Python or R SQL Probability and statistics Building and optimizing machine learning models Data visualization Communication Big data Data mining Data analysis Every data scientist will need to know the basics, but one role might require some more in-depth experience with Natural Language Processing (NLP), whereas another might need you to build production-ready predictive algorithms. Is it hard to become a data scientist? Yes — you should expect to face challenges on your journey to becoming a data scientist. This role requires fairly advanced programming skills and statistical knowledge, in addition to strong communication skills. Anyone can learn these skills, but you'll need motivation to push yourself through the tough moments. Choosing the right platform and approach to learning can also help make the process easier. How long does it take to become a data scientist? The length of time it takes to become a data scientist varies from person to person. At Dataquest, most of our students report reaching their learning goals in one year or less. How long the learning process takes you will depend on how much time you're able to dedicate to it. Similarly, the job search process can vary in length depending on the projects you've built, your other qualifications, your professional background, and more. Is data science a good career choice? Yes — a data science career is a fantastic choice. Demand for data scientists is high, and the world is generating a massive (and increasing) amount of data every day.  We don't claim to have a crystal ball or know what the future holds, but data science is a fast-growing field with high demand and lucrative salaries. What is the data scientist career path? The typical data scientist career path usually begins with other data careers, such as data analysts or data engineers. Then it moves into other data science roles via internal promotion or job changes. From there, more experienced data scientists can look for senior data scientist roles. Experienced data scientists with management skills can move into director of data science and similar director and executive-level roles. What salaries do data scientists make? Salaries vary widely based on location and the experience level of the applicant. On average, however, data scientists make very comfortable salaries. In 2022, the average data scientist salary is more than $120,000 USD per year in the US. And other data science roles also command high salaries Data analyst $96,707 Data engineer $131,444 Data architect $135,096 Business analyst $97,224 Which certification is best for data science? Many assume that a data science certification or completion of a data science bootcamp is something that hiring managers are looking for in qualified candidates, but this isn’t true. Hiring managers are looking for a demonstration of the skills required for the job. And unfortunately, a data analytics or data science certificate isn’t the best showcase of your skills.  The reason for this is simple.  
There are dozens of bootcamps and data science certification programs out there. Many places offer them — from startups to universities to learning platforms. Because there are so many, employers have no way of knowing which ones are the most rigorous.  While an employer may view a certificate as an example of an eagerness to continue learning, they won’t see it as a demonstration of skills or abilities. The best way to showcase your skills properly is with projects and a robust portfolio.


Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure

In this blog, we will discuss one of the ways to make huge models like BERT smaller and faster: the OpenVINO Neural Network Compression Framework (NNCF) and ONNX Runtime with the OpenVINO Execution Provider, through Azure Machine Learning.

Big models are slow; we need to make them faster

Today's best-performing language processing models use huge neural architectures with hundreds of millions of parameters. State-of-the-art transformer-based architectures like BERT are available as pretrained models for anyone to use for any language task. The big models have outstanding accuracy, but they are difficult to use in practice. These models are resource hungry due to their large number of parameters. The issues become worse when serving the fine-tuned model: it requires a lot of memory and time to process a single message. A state-of-the-art model is not good if it can handle only one message per second. To improve the throughput, we need to accelerate the well-performing BERT model by reducing the computation or the number of operations with the help of quantization.

Overview of Optimum Intel and quantization aware training

Optimum Intel is an extension for the Hugging Face Optimum library with the OpenVINO runtime as a backend for the Transformers architectures. It also provides an interface to Intel's NNCF (Neural Network Compression Framework) package. It helps implement Intel's optimizations through NNCF with changes to just a few lines of code in the training pipeline.

Quantization aware training (QAT) is a widely used technique for optimizing models during training. It inserts nodes into the neural network during training that simulate the effect of lower precision. This allows the training algorithm to consider quantization errors as part of the overall training loss that gets minimized during training. QAT gives better accuracy and reliability than carrying out quantization after the model has been trained. The output after training with our tool is a quantized PyTorch model, an ONNX model, and the OpenVINO IR (.xml).

Overview of ONNX Runtime and the OpenVINO Execution Provider

ONNX Runtime is an open source project that is designed to accelerate machine learning across a wide range of frameworks, operating systems, languages, and hardware platforms. It enables the acceleration of machine learning inferencing across all of your deployment targets using a single set of APIs.

Intel and Microsoft joined hands to create the OpenVINO Execution Provider (OVEP) for ONNX Runtime, which enables ONNX models to run inference using the ONNX Runtime APIs while using the OpenVINO Runtime as a backend. With the OpenVINO Execution Provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic acceleration on Intel CPU, GPU, and VPU. Now that you have a basic understanding of quantization, ONNX Runtime, and OVEP, let's take the best of both worlds and stitch the story together.

Putting the tools together to achieve better performance

In our next steps, we will be doing quantization aware training using Optimum Intel and inference using Optimum-ORT with the OpenVINO Execution Provider through Azure Machine Learning.
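To make the idea of simulated lower precision concrete, here is a small self-contained numpy sketch (our own illustration, not code from NNCF or Optimum) of the quantize/dequantize round trip that QAT inserts into the graph; the symmetric per-tensor scaling scheme is an assumption chosen for simplicity:

```python
import numpy as np

def fake_quantize(x: np.ndarray, num_bits: int = 8) -> np.ndarray:
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1  # -128..127 for INT8
    scale = np.abs(x).max() / qmax or 1.0                         # symmetric per-tensor scale
    q = np.clip(np.round(x / scale), qmin, qmax)                  # "QuantizeLinear": snap to the INT8 grid
    return q * scale                                              # "DequantizeLinear": back to float

weights = np.array([0.012, -0.507, 0.998, -1.234, 0.250])
approx = fake_quantize(weights)
print(approx)                    # weights as the INT8 grid can represent them
print(np.abs(weights - approx))  # quantization error that QAT lets the training loss see
```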
Optimum can be used to load optimized models from the Hugging Face Hub and create pipelines to run accelerated inference.

Converting the PyTorch FP32 model to an INT8 ONNX model with QAT

When utilizing the Hugging Face training pipelines, all you need to do is update a few lines of code and you can invoke the NNCF optimizations for quantizing the model. The output of this is an optimized INT8 PyTorch model, an ONNX model, and the OpenVINO IR. See the flow diagram below.

For this case study, we have chosen the bert-squad pretrained model from Hugging Face, which has been pretrained on the SQuAD dataset for the question-answering use case. QAT can be applied by replacing the Transformers Trainer with the Optimum OVTrainer. See below:

from trainer_qa import QuestionAnsweringOVTrainer

Run the training pipeline:

1. Import OVConfig:
from optimum.intel.openvino import OVConfig
from trainer_qa import QuestionAnsweringOVTrainer

2. Initialize a config:
ov_config = OVConfig()

3. Initialize our trainer:
trainer = QuestionAnsweringOVTrainer()

Comparison of the FP32 model and the INT8 ONNX model with the Netron model visualization tool

When compared with FP32, the INT8 model has QuantizeLinear and DequantizeLinear operations added to mimic the lower precision after QAT.

Fig 1: FP32 model
Fig 2: INT8 model

To replicate this example, check out the reference code, with detailed instructions on QAT and inference using OpenVINO and Azure Machine Learning, in the Jupyter Notebooks on GitHub.

Performance improvement results

Metric        Original FP32    QAT INT8
F1            93.1             92.83
Eval_exact    86.91            86.94

F1: computed over the individual words in the prediction against those in the true answer. The number of shared words between the prediction and the truth is the basis of the F1 score: precision is the ratio of the number of shared words to the total number of words in the prediction, and recall is the ratio of the number of shared words to the total number of words in the ground truth.

Eval_exact: this metric is as simple as it sounds. For each question and answer pair, if the characters of the model's prediction exactly match the characters of (one of) the true answer(s), EM = 1; otherwise EM = 0. This is a strict all-or-nothing metric; being off by a single character results in a score of 0. When assessing against a negative example, if the model predicts any text at all, it automatically receives a 0 for that example.

Comparison of the ONNXRUNTIME_PERF_TEST application for the ONNX-FP32 and ONNX-INT8 models

We've used the ONNX Runtime APIs for running inference for the BERT model. As you can see, performance for the INT8 optimized model improved to almost 2.95x when compared to FP32, without much compromise in accuracy. Quantized PyTorch, ONNX, and INT8 models can also be served using OpenVINO Model Server for high scalability and optimization for Intel solutions, so that you can take advantage of all the power of the Intel Xeon processor or Intel's AI accelerators and expose it over a network interface.

Optimize speed and performance

As neural networks move from servers to the edge, optimizing speed and size becomes even more important. In this blog, we gave an overview of how to use open source tooling to make it easy to improve performance.

References

Enhanced Low-Precision Pipeline to Accelerate Inference with OpenVINO toolkit.
Developer Guide: Model Optimization with the OpenVINO Toolkit.
Evaluating QA Metrics, Predictions, and the Null Response.

SW/HW configuration

Framework configuration: ONNX Runtime, Optimum-Intel [NNCF]
Application configuration: ONNX Runtime, EP: OpenVINO ./onnx_perf_test; OPENVINO 2022.2 ./benchmark_app
Input: Question and context
Application Metric: Normalized throughput
Platform: Intel Icelake-8380
Number of Nodes: 2
Number of Sockets: 2
CPU or Accelerator: Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz
Cores/socket, Threads/socket or EU/socket: 40, 2
ucode: 0xd000375
HT: Enabled
Turbo: Enabled
BIOS Version: American Megatrends International, LLC. V1.4
System DDR Mem Config (slots / cap / run-speed): 32 / 32 GB / 3200 MT/s
Total Memory/Node (DDR+DCPMM): 1024GB
Storage (boot): INTEL_SSDSC2KB019T8 1.8T
NIC: 2 x Ethernet Controller X710 for 10GBASE-T
OS: Ubuntu 20.04.4 LTS
Kernel: 5.15.0-46-generic

The post Improve BERT inference speed by combining the power of Optimum, OpenVINO™, ONNX Runtime, and Azure appeared first on Microsoft Open Source Blog.
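For readers who want to try the resulting INT8 model outside of Azure Machine Learning, below is a minimal, hedged sketch of question-answering inference with ONNX Runtime and the OpenVINO Execution Provider. It assumes the onnxruntime-openvino and transformers packages are installed; the model path, the provider option value, and the output ordering are assumptions that depend on how the model was exported:

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")

session = ort.InferenceSession(
    "bert_squad_int8.onnx",  # placeholder path to the QAT INT8 model
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"device_type": "CPU_FP32"}, {}],  # option key/value assumed from OVEP docs
)

question = "What is ONNX Runtime?"
context = "ONNX Runtime is a cross-platform accelerator for machine learning models."
encoded = tokenizer(question, context, return_tensors="np")

# Feed only the inputs this particular export expects (some exports drop token_type_ids).
input_names = {i.name for i in session.get_inputs()}
feed = {k: v for k, v in encoded.items() if k in input_names}

start_logits, end_logits = session.run(None, feed)[:2]  # assumed output order: start, end
start, end = int(np.argmax(start_logits)), int(np.argmax(end_logits))
print(tokenizer.decode(encoded["input_ids"][0][start : end + 1]))
```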


High-performance deep learning in Oracle Cloud with ONNX Runtime

This blog is co-authored by Fuheng Wu, Principal Machine Learning Tech Lead, Oracle Cloud AI Services, Oracle Inc.

Enabling scenarios through the usage of Deep Neural Network (DNN) models is critical to our AI strategy at Oracle, and our Cloud AI Services team has built a solution to serve DNN models for customers in the healthcare sector. In this blog post, we'll share the challenges our team faced and how ONNX Runtime solves them as the backbone of success for high-performance inferencing.

Challenge 1: Models from different training frameworks

To provide the best solutions for specific AI tasks, Oracle Cloud AI supports a variety of machine learning models trained in different frameworks, including PyTorch, TensorFlow, PaddlePaddle, and Scikit-learn. While each of these frameworks has its own built-in serving solution, maintaining so many different serving frameworks would be a nightmare in practice. Therefore, one of our biggest priorities was to find a versatile, unified serving solution to streamline maintenance.

Challenge 2: High performance across a diverse hardware ecosystem

For Oracle Cloud AI services, low latency and high accuracy are crucial for meeting customers' requirements. The DNN model servers are hosted in Oracle Cloud Compute clusters, and most of them are equipped with different CPUs (Intel, AMD, and ARM) and operating systems. We needed a solution that would run well on all the different Oracle compute shapes while remaining easy to maintain.

Solution: ONNX Runtime

In our search for the best DNN inference engine to support our diverse models and perform well across our hardware portfolio, ONNX Runtime caught our eye and stood out from alternatives.

ONNX Runtime is a high-performance, cross-platform accelerator for machine learning models. Because ONNX Runtime supports the Open Neural Network Exchange (ONNX) format, models trained in different frameworks can be converted to ONNX and run on all platforms supported by ONNX Runtime. This makes it easy to deploy machine learning models across different environments, including cloud, edge, and mobile devices. ONNX Runtime supports all the Oracle Cloud compute shapes, including VM.Standard.A1.Flex (ARM CPU), VM.Standard.3/E3/4.Flex (AMD and Intel CPU), and VM.Optimized3.Flex (Intel CPU). Not only does ONNX Runtime run on a variety of hardware, but its execution provider interface also allows it to efficiently utilize accelerators specific to each hardware platform.

Validating ONNX Runtime

Based on our evaluation, we were optimistic about using ONNX Runtime as our model inferencing solution, and the next step was to verify its compatibility and performance to ensure it could meet our targets. It was relatively easy to verify hardware, operating system, and model compatibility by just launching the model servers with ONNX Runtime in the cloud. To systematically measure and compare ONNX Runtime's performance and accuracy against alternative solutions, we developed a pipeline system. ONNX Runtime's extensibility simplified the benchmarking process, as it allowed us to seamlessly integrate other inference engines by compiling them as different execution providers (EPs) for ONNX Runtime. Thus, ONNX Runtime served not only as a runtime engine but as a platform where we could support many inference engines and choose the best one to suit our needs at runtime. We compiled TVM, OneDNN, and OpenVINO into ONNX Runtime, and it was very convenient to switch between these different inference engines with a unified programming interface.
For example, in Oracle's VM.Optimized3.Flex and BM.Optimized3.36 compute instances, where the Intel(R) Xeon(R) Gold 6354 CPU is available, OpenVINO could run faster than other inference engines by a large margin due to its AVX VNNI instruction set support. We didn't want to change our model serving code to fit different serving engines, and ONNX Runtime's EP feature conveniently allowed us to write the code once and run it with different inference engines.

Benchmarking ONNX Runtime against alternative inference engines

With our pipeline configured to test all relevant inference engines, we began the benchmarking process for different models and environments. In our tests, ONNX Runtime was the clear winner, beating the alternatives by a big margin: it measured 30 to 300 percent faster than the original PyTorch inference engine regardless of whether just-in-time (JIT) compilation was enabled.

ONNX Runtime on CPU was also the best solution compared to DNN compilers like TVM, OneDNN (formerly known as Intel MKL-DNN), and MLIR. OneDNN was the closest to ONNX Runtime, but still 20 to 80 percent slower in most cases. MLIR was not as mature as ONNX Runtime two years ago, and that conclusion still holds at the time of this writing: it doesn't support dynamic input shape models and only supports a limited set of ONNX operators. TVM also performed well on static-shape model inference, but for accuracy reasons most of our models use dynamic shape input, and TVM raised exceptions for those models. Even with static shape models, we found TVM to be slower than ONNX Runtime.

We investigated the reason for ONNX Runtime's strong performance and found it to be extremely optimized for CPU servers. All the core algorithms, such as the crucial 2D convolution, transpose convolution, and pooling algorithms, are delicately hand-written in assembly code and statically compiled into the binary. It even won against TVM's autotuning without any extra preprocessing or tuning. OneDNN's JIT is designed to be flexible and extensible and can dynamically generate machine code for DNN primitives on the fly. However, it still lost to ONNX Runtime in our benchmark tests because ONNX Runtime statically compiled the primitives beforehand. Theoretically, there are several tunable parameters in the DNN primitive algorithms, so in some cases, like edge devices with different register files and CPU cache sizes, there might be better algorithms or implementations with different choices of parameters. However, for the DNN models in Oracle Cloud Compute CPU clusters, ONNX Runtime is a match made in heaven and is the fastest inference engine we have ever used.

Conclusion

We really appreciate the ONNX Runtime team for open-sourcing this amazing software and continuously improving it. This enables Oracle Cloud AI Services to provide a performant DNN model serving solution to our customers, and we hope that others will also find our experience helpful.

Learn more about ONNX Runtime

ONNX Runtime Tutorials.
Video tutorials for ONNX Runtime.

The post High-performance deep learning in Oracle Cloud with ONNX Runtime appeared first on Microsoft Open Source Blog.
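As a footnote to the benchmarking discussion above, here is a minimal sketch (our own illustration, not Oracle's pipeline code) of timing one ONNX model under different ONNX Runtime execution providers; the model path, input name, and input shape are placeholders:

```python
import time
import numpy as np
import onnxruntime as ort

MODEL_PATH = "model.onnx"                                             # placeholder model
FEED = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}  # placeholder input name/shape

def avg_latency_ms(providers: list[str], runs: int = 100) -> float:
    session = ort.InferenceSession(MODEL_PATH, providers=providers)
    session.run(None, FEED)                                           # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, FEED)
    return (time.perf_counter() - start) / runs * 1000

candidates = [
    ["CPUExecutionProvider"],
    ["OpenVINOExecutionProvider", "CPUExecutionProvider"],            # needs the onnxruntime-openvino build
]
available = set(ort.get_available_providers())
for providers in candidates:
    if set(providers) <= available:
        print(providers[0], f"{avg_latency_ms(providers):.2f} ms/inference")
    else:
        print(providers[0], "not available in this ONNX Runtime build")
```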


Announcing the availability of Feathr 1.0
Announcing the availability of Feathr 1.0

This blog is co-authored by Edwin Cheung, Principal Software Engineering Manager, and Xiaoyong Zhu, Principal Data Scientist.

Feathr is an enterprise-scale feature store, which facilitates the creation, engineering, and usage of machine learning features in production. It has been used by many organizations as an online/offline store, as well as for real-time streaming. Today, we are excited to announce the much-anticipated availability of the OSS Feathr 1.0. It contains many new features and enhancements since Feathr became open-source one year ago. Capabilities such as online transformation, the rapid sandbox environment, and MLOps V2 accelerator integration really accelerate the development and deployment of machine learning projects at enterprise scale.

Online transformation via domain specific language (DSL)

In various machine learning scenarios, feature generation is required for both training and inference. Previously there was a limitation where the data source could not come from an online service, because transformation only happened before feature data was published to the online store, while the transformation is required close to real time. In such cases, there is a need for a mechanism where the user has the ability to run transformation on the inference data dynamically, before inferencing via the model. The new online transformation via DSL feature addresses these challenges by using a custom transformation engine that can process transformation requests and responses close to real time, on demand. It allows transformation logic to be defined declaratively using a DSL syntax which is based on EBNF. It also provides extensibility, where there is a need to define custom complex transformations, by supporting user-defined functions (UDFs) written in Python or Java.

nyc_taxi_demo(pu_loc_id as int, do_loc_id as int, pu_time as string, do_time as string, trip_distance as double, fare_amount as double)
| project duration_second = (to_unix_timestamp(do_time, "%Y/%-m/%-d %-H:%-M") - to_unix_timestamp(pu_time, "%Y/%-m/%-d %-H:%-M"))
| project speed_mph = trip_distance * 3600 / duration_second;

This declarative logic runs in a new high-performance DSL engine. We provide a Helm chart to deploy this service on a container-based technology such as Azure Kubernetes Service (AKS). The transformation engine can also run as a standalone executable, which is an HTTP server that can be used to transform data for testing purposes (published as the container image feathrfeaturestore/feathrpiper:latest).

curl -s -H "content-type: application/json" http://localhost:8000/process -d '{"requests": [{"pipeline": "nyc_taxi_demo_3_local_compute", "data": {"pu_loc_id": 41, "do_loc_id": 57, "pu_time": "2020/4/1 0:41", "do_time": "2020/4/1 0:56", "trip_distance": 6.79, "fare_amount": 21.0}}]}'

It also provides the ability to auto-generate the DSL file if there are already predefined feature transformations which have been created for the offline transformation.

Online transformation performance benchmark

It is imperative that online transformation performs close to real time and meets the low-latency, high queries-per-second (QPS) transformation demands of many enterprise customers. To determine the performance, we conducted a benchmark with three tests: first, deployment on AKS with traffic going through the ingress controller; second, traffic going through the AKS internal load balancer; and finally, traffic via localhost.
Benchmark A: Traffic going through ingress controller (AKS)

Infrastructure setup:
- Test agent runs on 1 pod on a node with size Standard_D8ds_v5.
- Transform function deployed as a docker image running on 1 pod on a different node with size Standard_D8ds_v5 in the same AKS cluster.
- Agent sends requests through the service hostname, which means traffic goes through the ingress controller.

Test command: ab -k -c {concurrency_count} -n 1000000 http://feathr-online.trafficmanager.net/healthz

Benchmark A result (latencies in ms):

Total Requests   Concurrency   p90   p95   p99   Requests/sec
1000000          100           3     4     9     43710
1000000          200           6     8     15    43685
1000000          300           10    11    18    43378
1000000          400           13    15    21    43220
1000000          500           16    19    24    42406

Benchmark B: Traffic going through AKS internal load balancer

Infrastructure setup:
- Test agent runs on 1 pod on a node with size Standard_D8ds_v5.
- Transform function deployed as a docker image running on 1 pod on a different node with size Standard_D8ds_v5 in the same AKS cluster.
- Agent sends requests through the service IP, which means traffic goes through the internal load balancer.

Test command: ab -k -c {concurrency_count} -n 1000000 http://10.0.187.2/healthz

Benchmark B result (latencies in ms):

Total Requests   Concurrency   p90   p95   p99   Requests/sec
1000000          100           3     4     4     47673
1000000          200           5     7     8     47035
1000000          300           9     10    12    46613
1000000          400           11    12    15    45362
1000000          500           14    15    19    44941

Benchmark C: Traffic going through localhost (AKS)

Infrastructure setup:
- Test agent runs on 1 pod on a node with size Standard_D8ds_v5.
- Transform function deployed as a docker image running on the same pod.
- Agent sends requests through localhost, which means there is no network traffic at all.

Test command: ab -k -c {concurrency_count} -n 1000000 http://localhost/healthz

Benchmark C result (latencies in ms):

Total Requests   Concurrency   p90   p95   p99   Requests/sec
1000000          100           2     2     3     59466
1000000          200           4     4     5     59433
1000000          300           6     6     8     60184
1000000          400           8     9     10    59622
1000000          500           10    11    14    59031

Benchmark summary:
- If the transform service and its upstream caller are on the same host/pod, the p95 latency is very good, staying within 10 ms when concurrency < 500.
- If they are on different hosts/pods and traffic goes through the internal load balancer, the p95 latency is roughly 2-4 ms higher.
- If they are on different hosts/pods and traffic goes through the ingress controller, the p95 latency is roughly 2-8 ms higher.

Benchmark thanks to Blair Chan and Chen Xu. For more details, check out the online transformation guide.

Getting started with the sandbox environment

This is an exciting feature, especially for data scientists who may not have the necessary infrastructure background or know how to deploy infrastructure in the cloud. The sandbox is a fully featured, quick-start Feathr environment that enables organizations to rapidly prototype various capabilities of Feathr without the burden of full-scale infrastructure deployment. It is designed to make it easier for users to get started quickly, validate feature definitions and new ideas, and have an interactive experience. By default, it comes with a Jupyter notebook environment to interact with the Feathr platform. Users can also use the user experience (UX) to visualize features, lineage, and other capabilities. To get started, check out the quick start guide to the local sandbox.

Feathr with MLOps V2 accelerator

The MLOps V2 solution accelerator provides a modular end-to-end approach to MLOps in Azure based on pattern architectures. We are pleased to announce an initial integration of Feathr into the classical pattern, which enables Terraform-based infrastructure deployment as part of the infrastructure provisioning with an Azure Machine Learning (AML) workspace.
With this integration, enterprise customers can use the templates to customize their continuous integration and continuous delivery (CI/CD) workflows to run end-to-end MLOps in their organization. Check out the Feathr integration with MLOps V2 deployment guide.

Feathr GUI enhancements

We have added a number of enhancements to the graphical user interface (GUI) to improve usability. These include support for registering features, support for deleting features, support for displaying versions, and quick access to lineage via the top menu. Try out our demo UX on our live demo site.

What's next

The Feathr journey has just begun; this is the first stop of many great things to come. So, stay tuned for many enterprise enhancements, security, monitoring, and compliance features, along with a more enriched MLOps experience. Check out how you can also contribute to this great project, and if you have not already, you can join our Slack channel here.

The post Announcing the availability of Feathr 1.0 appeared first on Microsoft Open Source Blog.
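For readers who would rather exercise the online transformation endpoint from code than from curl, here is a minimal Python sketch of the same request shown earlier in the Feathr post. The host, port, pipeline name, and payload fields are copied from that example and assume a local test deployment; they are not a prescribed API beyond what the post demonstrates.

import json
import requests  # third-party 'requests' package

URL = "http://localhost:8000/process"  # standalone DSL engine or port-forwarded AKS service

payload = {
    "requests": [
        {
            "pipeline": "nyc_taxi_demo_3_local_compute",
            "data": {
                "pu_loc_id": 41,
                "do_loc_id": 57,
                "pu_time": "2020/4/1 0:41",
                "do_time": "2020/4/1 0:56",
                "trip_distance": 6.79,
                "fare_amount": 21.0,
            },
        }
    ]
}

resp = requests.post(URL, headers={"content-type": "application/json"}, data=json.dumps(payload))
resp.raise_for_status()
# The response is expected to contain the derived features (duration_second, speed_mph).
print(resp.json())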


Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT
Faster inference for PyTorch models with OpenVINO ...

Deep learning models are everywhere without us even realizing it. The number of AI use cases have been increasing exponentially with the rapid development of new algorithms, cheaper compute, and greater access to data. Almost every industry has deep learning applications, from healthcare to education to manufacturing, construction, and beyond. Many developers opt to use popular AI Frameworks like PyTorch, which simplifies the process of analyzing predictions, training models, leveraging data, and refining future results.PyTorch on AzureGet an enterprise-ready PyTorch experience in the cloud.Learn morePyTorch is a machine learning framework used for applications such as computer vision and natural language processing, originally developed by Meta AI and now a part of the Linux Foundation umbrella, under the name of PyTorch Foundation. PyTorch has a powerful, TorchScript-based implementation that transforms the model from eager to graph mode for deployment scenarios.One of the biggest challenges PyTorch developers face in their deep learning projects is model optimization and performance. Oftentimes, the question arises How can I improve the performance of my PyTorch models? As you might have read in our previous blog, Intel and Microsoft have joined hands to tackle this problem with OpenVINO Integration with Torch-ORT. Initially, Microsoft had released Torch-ORT, which focused on accelerating PyTorch model training using ONNX Runtime. Recently, this capability was extended to accelerate PyTorch model inferencing by using the OpenVINO toolkit on Intel central processing unit (CPU), graphical processing unit (GPU), and video processing unit (VPU) with just two lines of code.Figure 1 OpenVINO Integration with Torch-ORT Application Flow. This figure shows how OpenVINO Integration with Torch-ORT can be used in a Computer Vision Application.By adding just two lines of code, we achieved 2.15 times faster inference for PyTorch Inception V3 model on an 11th Gen Intel Core i7 processor1. In addition to Inception V3, we also see performance gains for many popular PyTorch models such as ResNet50, Roberta-Base, and more. Currently, OpenVINO Integration with Torch-ORT supports over 120 PyTorch models from popular model zoo's, like Torchvision and Hugging Face.Figure 2 FP32 Model Performance of OpenVINO Integration with Torch-ORT as compared to PyTorch. This chart shows average inference latency (in milliseconds) for 100 runs after 15 warm-up iterations on an 11th Gen Intel(R) Core (TM) i7-1185G7 @ 3.00GHz.FeaturesOpenVINO Integration with Torch-ORT introduces the following featuresInline conversion of static/dynamic input shape modelsGraph partitioningSupport for INT8 modelsDockerfiles/Docker ContainersInline conversion of static/dynamic input shape modelsOpenVINO Integration with Torch-ORT performs inferencing of PyTorch models by converting these models to ONNX inline and subsequently performing inference with OpenVINO Execution Provider. Currently, both static and dynamic input shape models are supported with OpenVINO Integration with Torch-ORT. You also have the ability to save the inline exported ONNX model using the DebugOptions API.Graph partitioningOpenVINO Integration with Torch-ORT supports many PyTorch models by leveraging the existing graph partitioning feature from ONNX Runtime. 
With this feature, the input model graph is divided into subgraphs depending on the operators supported by OpenVINO and the OpenVINO-compatible subgraphs run using OpenVINO Execution Provider and unsupported operators fall back to MLAS CPU Execution Provider.Support for INT8 modelsOpenVINO Integration with Torch-ORT extends the support for lower precision inference through post-training quantization (PTQ) technique. Using PTQ, developers can quantize their PyTorch models with Neural Network Compression Framework (NNCF) and then run inferencing with OpenVINO Integration with Torch-ORT. Note Currently, our INT8 model support is in the early stages, only including ResNet50 and MobileNetv2. We are continuously expanding our INT8 model coverage.Docker ContainersYou can now use OpenVINO Integration with Torch-ORT on Mac OS and Windows OS through Docker. Pre-built Docker images are readily available on Docker Hub for your convenience. With a simple docker pull, you will now be able to unleash the key to accelerating performance of PyTorch models. To build the docker image yourself, you can also find dockerfiles readily available on Github.Customer storyRoboflowRoboflow empowers ISVs to build their own computer vision applications and enables hundreds of thousands of developers with a rich catalog of services, models, and frameworks to further optimize their AI workloads on a variety of different Intel hardware. An easy-to-use developer toolkit to accelerate models, properly integrated with AI frameworks, such as OpenVINO integration with Torch-ORT provides the best of both worldsan increase in inference speed as well as the ability to reuse already created AI application code with minimal changes. The Roboflow team has showcased a case study that demonstrates performance gains with OpenVINO Integration with Torch-ORT as compared to Native PyTorch for YOLOv7 model on Intel CPU. The Roboflow team is continuing to actively test OpenVINO integration with Torch-ORT with the goal of enabling PyTorch developers in the Roboflow Community.Try it outTry out OpenVINO Integration with Torch-ORT through a collection of Jupyter Notebooks. Through these sample tutorials, you will see how to install OpenVINO Integration with Torch-ORT and accelerate performance for PyTorch models with just two additional lines of code. Stay in the PyTorch framework and leverage OpenVINO optimizationsit doesn't get much easier than this.Learn moreHere is a list of resources to help you learn moreGithub RepositorySample NotebooksSupported ModelsUsage GuidePyTorch on AzureNotes1Framework configuration ONNXRuntime 1.13.1Application configuration torch_ort_infer 1.13.1, python timeit module for timing inference of modelsInput Classification models torch.Tensor; NLP models Masked sentence; OD model .jpg imageApplication Metric Average Inference latency for 100 iterations calculated after 15 warmup iterationsPlatform Tiger LakeNumber of Nodes 1 Numa NodeNumber of Sockets 1CPU or Accelerator 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHzCores/socket, Threads/socket or EU/socket 4, 2 Threads/Coreucode 0xa4HT EnabledTurbo EnabledBIOS Version TNTGLV57.9026.2020.0916.1340System DDR Mem Config slots / cap / run-speed 2/32 GB/2667 MT/sTotal Memory/Node (DDR+DCPMM) 64GBStorage boot Sabrent Rocket 4.0 500GB – size 465.8GOS Ubuntu 20.04.4 LTSKernel 5.15.0-1010-intel-iotgNotices and disclaimersPerformance varies by use, configuration, and other factors. 
Learn more at www.Intel.com/PerformanceIndex.Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.Your costs and results may vary.Intel technologies may require enabled hardware, software, or service activation.Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from a course of performance, course of dealing, or usage in trade.Results have been estimated or simulated. Intel Corporation. Intel, the Intel logo, OpenVINO, and the OpenVINO logo are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.The post Faster inference for PyTorch models with OpenVINO Integration with Torch-ORT appeared first on Microsoft Open Source Blog.
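To show what the "two lines of code" mentioned in the post above typically look like, here is a minimal Python sketch based on the publicly documented torch-ort-infer usage. The import path, the ORTInferenceModule wrapper, and the choice of ResNet50 are assumptions to be checked against the current package documentation, not a verbatim excerpt from this post.

import torch
import torchvision.models as models
from torch_ort import ORTInferenceModule  # assumed to come from the torch-ort-infer package

# Any supported PyTorch model; ResNet50 with random weights is enough for a latency smoke test.
model = models.resnet50(weights=None).eval()

# The "two lines": wrap the model so inference is routed through ONNX Runtime
# with the OpenVINO execution provider underneath.
model = ORTInferenceModule(model)

# From here on, inference looks like ordinary PyTorch.
dummy_input = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    output = model(dummy_input)
print(output.shape)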


Towards debuggability and secure deployments of eBPF programs on Windows
Towards debuggability and secure deployments of eB ...

The eBPF for Windows runtime has introduced a new mode of operation, native code generation, which exists alongside the currently supported modes of operation for eBPF programs: JIT (just-in-time compilation) and an interpreter, with the administrator able to select the mode when a program is loaded. The native code generation mode involves loading Windows drivers that contain signed eBPF programs. Due to the risks associated with having an interpreter in the kernel address space, it was decided to only enable it for non-production signed builds. The JIT mode supports the ability to dynamically generate code, write it into kernel pages, and finally set the permissions on the page from read/write to read/execute.

Enter the Windows Hyper-V hypervisor, a type 1 hypervisor, which has the Hypervisor-protected Code Integrity (HVCI) feature. It splits the kernel memory space into virtual trust levels (VTLs), with isolation enforced at the hardware level using virtualization extensions of the CPU. Most parts of the Windows kernel and all drivers operate in VTL0, the lowest trusted level, with privileged operations being performed inside the Windows secure kernel operating in VTL1. During the boot process, the hypervisor verifies the integrity of the secure kernel using cryptographic signatures prior to launching it, after which the secure kernel verifies the cryptographic signature of each code page prior to enabling read/execute permissions on the page. The signatures are validated using keys obtained from X.509 certificates that chain up to a Microsoft trusted root certificate. The net effect of this policy is that if HVCI is enabled, it is no longer possible to inject dynamically generated code pages into the kernel, which prevents the use of JIT mode. Similarly, Windows uses cryptographic signatures to restrict what code can be executed in the kernel. In keeping with these principles, eBPF for Windows has introduced a new mode of execution that an administrator can choose to use that maintains the integrity of the kernel and provides the safety promises of eBPF: native code generation.

The process starts with the existing tool chains, whereby eBPF programs are compiled into eBPF bytecode and emitted as ELF object files. The examples below assume the eBPF-for-Windows NuGet package has been unpacked to c:\ebpf and that the command is being executed from within a Developer Command Prompt for VS 2019.

How to use native code generation

Hello_world.c:

// Copyright (c) Microsoft Corporation
// SPDX-License-Identifier: MIT
#include "bpf_helpers.h"

SEC("bind")
int
HelloWorld()
{
    bpf_printk("Hello World!");
    return 0;
}

Compile to eBPF:

> clang -target bpf -O2 -Werror -Ic:/ebpf/include -c hello_world.c -o out/hello_world.o
> llvm-objdump -S out/hello_world.o

eBPF bytecode:

b7 01 00 00 72 6c 64 21                            r1 = 560229490
63 1a f8 ff 00 00 00 00                            *(u32 *)(r10 - 8) = r1
18 01 00 00 48 65 6c 6c 00 00 00 00 6f 20 57 6f    r1 = 8022916924116329800 ll
7b 1a f0 ff 00 00 00 00                            *(u64 *)(r10 - 16) = r1
b7 01 00 00 00 00 00 00                            r1 = 0
73 1a fc ff 00 00 00 00                            *(u8 *)(r10 - 4) = r1
bf a1 00 00 00 00 00 00                            r1 = r10
07 01 00 00 f0 ff ff ff                            r1 += -16
b7 02 00 00 0d 00 00 00                            r2 = 13
85 00 00 00 0c 00 00 00                            call 12
b7 00 00 00 00 00 00 00                            r0 = 0
95 00 00 00 00 00 00 00                            exit

The next step involves a new tool introduced specifically to support this scenario: bpf2c.
This tool parses the supplied ELF file, extracting the list of maps and stored programs before handing off the byte code to the eBPF verifier, which proves that eBPF byte code is effectively sandboxed and constrained to terminate within a set number of instructions. The tool then performs a per-instruction translation of the eBPF byte code into the equivalent C statements and emits skeleton code used to perform relocation operations at run time. For convenience, the NuGet package also contains a PowerShell script that invokes bpf2c and then uses MSBuild to produce the final Portable Executable (PE) image, (an image format used by Windows). As an aside, the process of generating the native image is decoupled from the process of developing the eBPF program, making it a deployment time decision rather than a development time one.> powershell c\ebpf\bin\Convert-BpfToNative.ps1 hello_world.oC\Users\user\hello_world\out>powershell c\ebpf\bin\Convert-BpfToNative.ps1 hello_world.oMicrosoft (R) Build Engine version 16.9.0+57a23d249 for .NET FrameworkCopyright (C) Microsoft Corporation. All rights reserved.Build started 5/17/2022 93843 AM.Project "C\Users\user\hello_world\out\hello_world.vcxproj" on node 1 (default targets).DriverBuildNotifications Building 'hello_world_km' with toolset 'WindowsKernelModeDriver10.0' and the 'Desktop' target platform. Using KMDF 1.15.<Lines removed for clarity>Done Building Project "C\Users\user\hello_world\out\hello_world.vcxproj" (default targets).Build succeeded. 0 Warning(s) 0 Error(s)Time Elapsed 000003.57> type hello_world_driver.c// Snip Removed boiler plate driver code and map setup.static uint64_tHelloWorld(void* context){ // Prologue uint64_t stack[(UBPF_STACK_SIZE + 7) / 8]; register uint64_t r0 = 0; register uint64_t r1 = 0; register uint64_t r2 = 0; register uint64_t r3 = 0; register uint64_t r4 = 0; register uint64_t r5 = 0; register uint64_t r10 = 0; r1 = (uintptr_t)context; r10 = (uintptr_t)((uint8_t*)stack + sizeof(stack)); // EBPF_OP_MOV64_IMM pc=0 dst=r1 src=r0 offset=0 imm=560229490 r1 = IMMEDIATE(560229490); // EBPF_OP_STXW pc=1 dst=r10 src=r1 offset=-8 imm=0 *(uint32_t*)(uintptr_t)(r10 + OFFSET(-8)) = (uint32_t)r1; // EBPF_OP_LDDW pc=2 dst=r1 src=r0 offset=0 imm=1819043144 r1 = (uint64_t)8022916924116329800; // EBPF_OP_STXDW pc=4 dst=r10 src=r1 offset=-16 imm=0 *(uint64_t*)(uintptr_t)(r10 + OFFSET(-16)) = (uint64_t)r1; // EBPF_OP_MOV64_IMM pc=5 dst=r1 src=r0 offset=0 imm=0 r1 = IMMEDIATE(0); // EBPF_OP_STXB pc=6 dst=r10 src=r1 offset=-4 imm=0 *(uint8_t*)(uintptr_t)(r10 + OFFSET(-4)) = (uint8_t)r1; // EBPF_OP_MOV64_REG pc=7 dst=r1 src=r10 offset=0 imm=0 r1 = r10; // EBPF_OP_ADD64_IMM pc=8 dst=r1 src=r0 offset=0 imm=-16 r1 += IMMEDIATE(-16); // EBPF_OP_MOV64_IMM pc=9 dst=r2 src=r0 offset=0 imm=13 r2 = IMMEDIATE(13); // EBPF_OP_CALL pc=10 dst=r0 src=r0 offset=0 imm=12 r0 = HelloWorld_helpers[0].address(r1, r2, r3, r4, r5); if ((HelloWorld_helpers[0].tail_call) && (r0 == 0)) return 0; // EBPF_OP_MOV64_IMM pc=11 dst=r0 src=r0 offset=0 imm=0 r0 = IMMEDIATE(0); // EBPF_OP_EXIT pc=12 dst=r0 src=r0 offset=0 imm=0 return r0;}As illustrated here each eBPF instruction is translated into an equivalent C statement, with eBPF registers being emulated using stack variables named R0 to R10.Lastly, the tool adds a set of boilerplate code that handles the interactions with the I/O Manager required to load the code into the Windows kernel, with the result being a single C file. 
The Convert-BpfToNative.ps1 script then invokes the normal Windows Driver Kit (WDK) tools to compile and link the eBPF program into its final PE image. Once the developer is ready to deploy their eBPF program in a production environment that has HVCI enabled, they will need to get their driver signed via the normal driver signing process. For a production workflow, one could imagine a service that consumes the ELF file (the eBPF byte code), securely verifies that it is safe, generates the native image, and signs it before publishing it for deployment. This could then be integrated into the existing developer workflows.The eBPF for Windows runtime has been enlightened to support these eBPF programs hosted in Windows drivers, resulting in a developer experience that closely mimics the behavior of eBPF programs that use JIT. The result is a pipeline that looks like thisThe net effect is to introduce a new statically sandboxed model for Windows Drivers, with the resulting driver being signed using standard Windows driver signing mechanisms. While this additional step does increase the time needed to deploy an eBPF program, some customers have determined that the tradeoff is justified by the ability to safely add eBPF programs to systems with HVCI enabled.Diagnostics and eBPF programsOne of the key pain points of developing eBPF programs is making sure they pass verification. The process of loading programs once they have been compiled, potentially on an entirely different system, gives rise to a subpar developer experience. As part of adding support for native code generation, eBPF for Windows has integrated the verification into the build pipeline, so that developers get build-time feedback when an eBPF program fails verification.Using a slightly more complex eBPF program as an example, the developer gets a build-time error when the program fails verificationeBPF C codeThis then points the developer to line 96 of the source code, where they can see that the start time variable could be NULL.As with all other instances of code, eBPF programs can have bugs. While the verifier can prove that code is safe, it is unable to prove code is correct. One approach that was pioneered by the Linux community is the use of logging built around the bpf_printk style macro, which permits developers to insert trace statements into their eBPF programs to aid diagnosability. To both maintain compatibility with the Linux eBPF ecosystem as well as being a useful mechanism, eBPF for Windows has adopted a similar approach. One of the key differences is how these events are implemented, with Linux using a file-based approach and Windows using Event Tracing for Windows (ETW). ETW has a long history within Windows and a rich ecosystem of tools that can be used to capture and process traces.A second useful tool that is now available to developers using native-code generation is the ability to perform source-level debugging of eBPF programs. If the eBPF program is compiled with BTF data, the bpf2c tool will translate this in addition to the instructions and emit the appropriate pragmas containing the original file name and line numbers (with plans to extend this to allow the debugger to show eBPF local variables in the future). These are then consumed by the Windows Developer Kit tools and encoded into the final driver and symbol files, which the debugger can use to perform source-level debugging. 
In addition, these same symbol files can then be used by profiling tools to determine hot spots within eBPF programs and areas where performance could be improved.Learn moreThe introduction of support for a native image generation enhances eBPF For Windows in three areasA new mode of execution permits eBPF programs to be deployed on previously unsupported systems.A mechanism for offline verification and signing of eBPF programs.The ability for developers to perform source-level debugging of their eBPF programs.While support will continue for the existing JIT mode, this change gives developers and administrators flexibility in how programs are deployed. Separating the process of native image generation from the development of the eBPF program places the decision on how to deploy an eBPF program in the hands of the administrator and unburdens the developer from deployment time concerns.The post Towards debuggability and secure deployments of eBPF programs on Windows appeared first on Microsoft Open Source Blog.



Introducing Bash for Beginners
Introducing Bash for Beginners

A new Microsoft video series for developers learning how to script.According to Stack Overflow 2022 Developer Survey, Bash is one of the top 10 most popular technologies. This shouldn't come as a surprise, given the popularity of using Linux systems with the Bash shell readily installed, across many tech stacks and the cloud. On Azure, more than 50 percent of virtual machine (VM) cores run on Linux. There is no better time to learn Bash!Long gone are the days of feeling intimidated by a black screen with text known as a terminal. Say goodbye to blindly typing in “chmod 777” while following a tutorial. Say hello to task automation, scripting fundamentals, programming basics, and your first steps to working with a cloud environment via the bash command line.What we’ll be coveringMy cohost, Josh, and I will walk you through everything you need to get started with Bash in this 20-part series. We will provide an overview of the basics of Bash scripting, starting with how to get help from within the terminal. The terminal is a window that lets you interact with your computer’s operating system, and in this case, the Bash shell. To get help with a specific command, you can use the man command followed by the name of the command you need help with. For example, man ls will provide information on the ls command, which is used for listing directories and finding files.Once you’ve gotten help from within the terminal, you can start navigating the file system. You’ll learn how to list directories and find files, as well as how to work with directories and files themselves. This includes creating, copying, moving, and deleting directories and files. You’ll also learn how to view the contents of a file using the cat command.Another important aspect of Bash is environment variables. These are values that are set by the operating system and are used by different programs and scripts. In Bash, you can access these variables using the “$” symbol followed by the name of the variable. For example, $PATH will give you the value of the PATH environment variable, which specifies the directories where the shell should search for commands.Redirection and pipelines are two other important concepts in Bash. Redirection allows you to control the input and output of a command, while pipelines allow you to chain multiple commands together. For example, you can use the “>” symbol to redirect the output of a command to a file, and the “|” symbol to pipe the output of one command to the input of another.When working with files in Linux, you’ll also need to understand file permissions. In Linux, files have permissions that determine who can access them and what they can do with them. You’ll learn about the different types of permissionssuch as read, write, and execute, and how to change them using the chmod command.Next, we’ll cover some of the basics of Bash scripting. You’ll learn how to create a script, use variables, and work with conditional statements, such as "if" and "if else". You’ll also learn how to use a case statement, which is a way to control the flow of execution based on the value of a variable. Functions are another important aspect of Bash scripting, and you’ll learn how to create and use them to simplify your scripts. Finally, you’ll learn about loops, which allow you to repeat a set of commands multiple times.Why Bash mattersBash is a versatile and powerful language that is widely used. 
Whether you’re looking to automate tasks, manage files, or work with cloud environments, Bash is a great place to start. With the knowledge you’ll gain from this series, you’ll be well on your way to becoming a proficient Bash scripter.Many other tools like programming languages and command-line interfaces (CLIs) integrate with Bash, so not only is this the beginning of a new skill set, but also a good primer for many others. Want to move on and learn how to become efficient with the Azure CLI? Bash integrates with the Azure CLI seamlessly. Want to learn a language like Python? Learning Bash teaches you the basic programming concepts you need to know such as flow control, conditional logic, and loops with Bash, which makes it easier to pick up Python. Want to have a Linux development environment on your Windows device? Windows Subsystem for Linux (WSL) has you covered and Bash works there, too!While we won't cover absolutely everything there is to Bash, we do make sure to leave you with a solid foundation. At the end of this course, you'll be able to continue on your own following tutorials, docs, books, and other resources. If live is more your style, catch one of our How Linux Works and How to leverage it in the Cloud Series webinars. We'll cover a primer on How Linux Works, discuss How and when to use Linux on Azure, and get your developer environment set up with WSL.This Bash for Beginners series is part of a growing library of video series on the Microsoft Developer channel looking to quickly learn new skills including Python, Java, C#, Rust, JavaScript and more.Learn more about Bash in our Open Source communityNeed help with your learning journey?Watch Bash for Beginners Find Josh and myself on Twitter. Share your questions and progress on our Tech Community, we'll make sure to answer and cheer you on. The post Introducing Bash for Beginners appeared first on Microsoft Open Source Blog.


Performant on-device inferencing with ONNX Runtime
Performant on-device inferencing with ONNX Runtime

As machine learning usage continues to permeate across industries, we see broadening diversity in deployment targets, with companies choosing to run locally on-client versus cloud-based services for security, performance, and cost reasons. On-device machine learning model serving is a difficult task, especially given the limited bandwidth of early-stage startups. This guest post from the team at Pieces shares the problems and solutions evaluated for their on-device model serving stack and how ONNX Runtime serves as their backbone of success.Local-first machine learningPieces is a code snippet management tool that allows developers to save, search, and reuse their snippets without interrupting their workflow. The magic of Pieces is that it automatically enriches these snippets so that they're more useful to the developer after being stored in Pieces. A large part of this enrichment is driven by our machine learning models that provide programming language detection, concept tagging, semantic description, snippet clustering, optical character recognition, and much more. To enable full coverage of the developer workflow, we must run these models from the desktop, terminal, integrated development environment, browser, and team communication channels.Like many businesses, our first instinct was to serve these models as cloud endpoints; however, we realized this wouldn't suit our needs for a few reasons. First, in order to maintain a seamless developer workflow, our models must have low latency. The round trip to the server is lost time we can't afford. Second, our users are frequently working with proprietary code, so privacy is a primary concern. Sending this data over the wire would expose it to potential attacks. Finally, hosting models on performant cloud machines can be very expensive and is an unnecessary cost in our opinion. We firmly believe that advances in modern personal hardware can be taken advantage of to rival or even improve upon the performance of models on virtual machines. Therefore, we needed an on-device model serving platform that would provide us with these benefits while still giving our machine learning engineers the flexibility that cloud serving offers. After some trial and error, ONNX Runtime emerged as the clear winner.Our ideal machine learning runtimeWhen we set out to find the backbone of our machine learning serving system, we were looking for the following qualitiesEasy implementationIt should fit seamlessly into our stack and require minimal custom code to implement and maintain. Our application is built in Flutter, so the runtime would ideally work natively in the Dart language so that our non-machine learning engineers could confidently interact with the API.BalancedAs I mentioned above, performance is key to our success, so we need a runtime that can spin up and perform inference lightning fast. On the other hand, we also need useful tools to optimize model performance, ease model format conversion, and generally facilitate the machine learning engineering processes.Model coverageIt should support the vast majority of machine learning model operators and architectures, especially cutting-edge models, such as those in the transformer family.TensorFlow LiteOur initial research revealed three potential options TensorFlow Lite, TorchServe, and ONNX Runtime. TensorFlow Lite was our top pick because of how easy it would be to implement. We found an open source Dart package which provided Dart bindings to the TensorFlow Lite C API out-of-the-box. 
This allowed us to simply import the package and immediately have access to machine learning models in our application without worrying about the lower-level details in C and C++.The tiny runtime offered great performance and worked very well for the initial models we tested in production. However, we quickly ran into a huge blocker converting other model formats to TensorFlow Lite is a pain. Our first realization of this limitation came when we tried and failed to convert a simple PyTorch LSTM to TensorFlow Lite. This spurred further research into how else we might be limited. We found that many of the models we planned to work on in the future would have to be trained in TensorFlow or Keras because of conversion issues. This was problematic because we've found that there's not a one-size-fits-all machine learning framework. Some are better suited for certain tasks, and our machine learning engineers differ in preference and skill level for each of these frameworksunfortunately, we tend to favor PyTorch over TensorFlow.This issue was then compounded by the fact that TensorFlow Lite only supports a subset of the machine learning operators available in TensorFlow and Kerasimportantly, it lags in more cutting-edge operators that are required in new, high-performance architectures. This was the final straw for us with TensorFlow Lite. We were looking to implement a fairly standard transformer-based model that we'd trained in TensorFlow and found that the conversion was impossible. To take advantage of the leaps and bounds made in large language models, we needed a more flexible runtime.TorchServeHaving learned our lesson on locking ourselves into a specific training framework, we opted to skip testing out TorchServe so that we would not run into the same conversion issues.ONNX Runtime saves the dayLike TensorFlow Lite, ONNX Runtime gave us a lightweight runtime that focused on performance, but where it really stood out was the model coverage. Being built around the ONNX format, which was created to solve interoperability between machine learning tools, it allowed our machine learning engineers to choose the framework that works best for them and the task at hand and have confidence that they would be able to convert their model to ONNX in the end. This flexibility brought more fluidity to our research and development process and reduced the time spent preparing new models for release.Another large benefit of ONNX Runtime for us is a standardized model optimization pipeline, truly becoming the “balanced” tool we were looking for. By serving models in a single format, we're able to iterate through a fixed set of known optimizations until we find the desired speed, size, and accuracy tradeoff for each model. Specifically, for each of our ONNX models, the last step before production is to apply different levels of ONNX Runtime graph optimizations and linear quantization. The ease of this process is a quick win for us every time.Speaking of feature-richness, a final reason that we chose ONNX Runtime was that the baseline performance was good but there were many options we could implement down the road to improve performance. Due to the way we currently build our app, we have been limited to the vanilla CPU builds of ONNX Runtime. However, an upcoming modification to our infrastructure will allow us to utilize execution providers to serve optimized versions of ONNX Runtime based on a user's CPU and GPU architecture. 
We also plan to implement dynamic thread management as well as IOBinding for GPU-enabled devices.Production workflowNow that we've covered our reasoning for choosing ONNX Runtime, we'll do a brief technical walkthrough of how we utilize ONNX Runtime to facilitate model deployment.Model conversionAfter we've finished training a new model, our first step towards deployment is getting that model into an ONNX format. The specific conversion approach depends on the framework used to train the model. We have successfully used the conversion tools supplied by HuggingFace, PyTorch, and TensorFlow.Some model formats are not supported by these conversion tools, but luckily ONNX Runtime has its own internal conversion utilities. We recently used these tools to implement a T5 transformer model for code description generation. The HuggingFace model uses a BeamSearch node for text generation that we were only able to convert to ONNX using ONNX Runtime's convert generation.py tool, which is included in their transformer utilities.ONNX model optimizationOur first optimization step is running the ONNX model through all ONNX Runtime optimizations, using GraphOptimizationLevel.ORT_ENABLE_ALL, to reduce model size and startup time. We perform all these optimizations offline so that our ONNX Runtime binary doesn't have to perform them on startup. We are able to consistently reduce model size and latency very easily with this utility.Our second optimization step is quantization. Again, ONNX Runtime provides an excellent utility for this. We've used both quantize_dynamic() and quantize_static() in production, depending on our desired balance of speed and accuracy for a specific model.InferenceOnce we have an optimized ONNX model, it's ready to be put into production. We've created a thin wrapper around the ONNX Runtime C++ API which allows us to spin up an instance of an inference session given an arbitrary ONNX model. We based this wrapper on the onnxruntime-inference-examples repository. After developing this simple wrapper binary, we were able to quickly get native Dart support using the Dart FFI (Foreign Function Interface) to create Dart bindings for our C++ API. This reduces the friction between teams at Pieces by allowing our Dart software engineers to easily inject our machine learning efforts into all of our services.ConclusionOn-device machine learning requires a tool that is performant yet allows you to take full advantage of the current state-of-the-art machine learning models. ONNX Runtime gracefully meets both needs, not to mention the incredibly helpful ONNX Runtime engineers on GitHub that are always willing to assist and are constantly pushing ONNX Runtime forward to keep up with the latest trends in machine learning. It's for these reasons that we at Pieces confidently rest our entire machine learning architecture on its shoulders.Learn more about ONNX RuntimeONNX Runtime Tutorials.Video tutorials for ONNX Runtime.The post Performant on-device inferencing with ONNX Runtime appeared first on Microsoft Open Source Blog.
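To make the offline graph optimization and quantization steps described in the Pieces post above more tangible, here is a minimal Python sketch using the ONNX Runtime utilities the post names (GraphOptimizationLevel.ORT_ENABLE_ALL and quantize_dynamic). The file paths are placeholders and the settings are illustrative; each model gets its own speed, size, and accuracy trade-off in practice.

import onnxruntime as ort
from onnxruntime.quantization import QuantType, quantize_dynamic

SRC = "model.onnx"             # exported ONNX model (placeholder path)
OPTIMIZED = "model.opt.onnx"   # graph-optimized model, produced offline
QUANTIZED = "model.quant.onnx" # dynamically quantized model

# Step 1: apply all ONNX Runtime graph optimizations offline and serialize the
# result, so the runtime does not redo this work at startup.
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
so.optimized_model_filepath = OPTIMIZED
ort.InferenceSession(SRC, so)  # constructing the session writes the optimized file

# Step 2: dynamic linear quantization of weights to INT8 for smaller, faster CPU models.
quantize_dynamic(OPTIMIZED, QUANTIZED, weight_type=QuantType.QInt8)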


Updated First Responder Kit and Consultant Toolkit for June 2023

This one's a pretty quiet release: just bug fixes in sp_Blitz, sp_BlitzLock, and sp_DatabaseRestore. Wanna watch me use it? Take the class.

To get the new version:
- Download the updated FirstResponderKit.zip
- Azure Data Studio users with the First Responder Kit extension: ctrl/command+shift+p, First Responder Kit: Import
- PowerShell users: run Install-DbaFirstResponderKit from dbatools
- Get The Consultant Toolkit to quickly export the First Responder Kit results into an easy-to-share spreadsheet

Consultant Toolkit Changes
I updated it to this month's First Responder Kit, but there are no changes to querymanifest.json or the spreadsheet. If you've customized those, no changes are necessary this month; just copy your spreadsheet and querymanifest.json into the new release's folder.

sp_Blitz Changes
- Fix: update the unsupported SQL Server versions list. Time marches on, SQL Server 2016 SP2. (#3274, thanks Michel Zehnder and sm8680.)
- Fix: if you ran sp_Blitz in databases other than master, we weren't showing the alerts on TDE certificates that haven't been backed up recently. (#3278, thanks ghauan.)

sp_BlitzLock Changes
- Enhancement: compatibility with Azure Managed Instances. (#3279, thanks Erik Darling.)
- Fix: convert existing output tables to larger data types. (#3277, thanks Erik Darling.)
- Fix: don't send output to the client when writing it to a table. (#3276, thanks Erik Darling.)

sp_DatabaseRestore Changes
- Improvement: new @FixOrphanUsers parameter. When 1, once the restore is complete, sets database_principals.principal_id to the value of server_principals.principal_id where database_principals.name = server_principals.name. (#3267, thanks Rebecca Lewis.)
- Fix: better handling of last log files for split backups when using @StopAt. (#3269, thanks Rebecca Lewis.)
- Fix: corrected a regression introduced in 8.11 that caused non-striped backups to no longer be deleted. (#3262, thanks Steve the DBA.)

For Support
When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it's staffed by volunteers with day jobs. If it's your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the "More Details" URL for any warning you find. We put a lot of work into documentation, and we wouldn't want someone to yell at you to go read the fine manual. After that, when you've still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you're working with.


