Press "Enter" to skip to content

Select Indistinct

Disable the PowerShell update nag in one line

[System.Environment]::SetEnvironmentVariable("POWERSHELL_UPDATECHECK",0,[System.EnvironmentVariableTarget]::User)

To be clear – I think you should be updating your PowerShell regularly; however, the HUGE WHITE BLOCK ACROSS MY ENTIRE SCREEN EVERY TIME I LAUNCH VISUAL STUDIO CODE ISN’T GREAT.

Hated those caps? Yeah, that’s basically my eyes every time I see this nag window inverting the colors across my ennntiiirrreee screen.

I checked the PowerShell repo, and some of the one-liners posted there didn’t work (and used SetEnvironmentVariableTarget, which was not a method I had?), so I wanted to make this easy in case you are getting frustrated with the PowerShell update check message, want it to go away, and don’t want to crack open the environment variables dialog.

Now go update your PowerShell 🙂

Awesome tools: Papa Parse

My most recent problem for my side project was implementing a drag-and-drop upload for CSV (character-separated value) data.

From experience I know that CSV data is easily malformed, so I wanted to present a nicely formatted list that lets a user preview the rows and opt out of certain items.

I started reading and writing some code to take a dropped file (which turns out to be straightforward in HTML5) and call some sort of preview function with it.

After a few frustrating hours and an active disinterest in node modules, I found a tool that really solved the problem right for me: Papa Parse.

Somewhere between a spreadsheet and a set of buttons I built a simple CSV preview.

As I would qualify my JavaScript as “learning level”, one of the big wins for me is any tool that solves my problem in a sane and reusable way without going off the deep end into JS land.

This was the entire set of code I needed to take a file object and return a useful set of rows.

let config = {
  delimiter: "", // auto-detect
  newline: "", // auto-detect
  quoteChar: '"',
  escapeChar: '"',
  header: true,
  skipEmptyLines: true,
  delimitersToGuess: [',', '\t', '|', ';', Papa.RECORD_SEP, Papa.UNIT_SEP]
};

let useful_data = Papa.parse(data, config); // note: a dropped File delivers its rows via the complete callback rather than the return value

Shout out to jQuery for the rest – when parsing finishes, call a function that draws a form with a checkbox per row and a submit button for the lot.
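A rough sketch of that flow, assuming the config above – the element ID and the way rows are rendered are illustrative, not the project’s actual code:

function previewCsv(file) {
  Papa.parse(file, {
    header: true,
    skipEmptyLines: true,
    complete: function (results) {
      // With header: true, results.data is an array of objects keyed by the header row.
      var $list = $("#csv-preview").empty();
      results.data.forEach(function (row, i) {
        var $checkbox = $("<input>", { type: "checkbox", name: "rows", value: i, checked: true });
        $list.append($("<label>").append($checkbox).append(" " + Object.values(row).join(", ")), $("<br>"));
      });
    }
  });
}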

Useful Django Bits

I have been busy working on some non-SQL side projects recently, and I wanted to note some of the pieces of code I have been appreciating.

https://github.com/PaesslerAG/django-currentuser is a simple plugin that allows you to reference your current user context in your models’ various functions. This greatly simplified some user management functions within my codebase, as I could express it all in the model.

https://www.django-rest-framework.org/ is a powerful framework on top of Django that lets you build a straightforward REST API. Django doesn’t have object-based permissions out of the box, and I have been building out the next version of my codebase with DRF; it’s definitely a lot more pluggable than anything I designed.
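For illustration, an object-level permission in DRF can be as small as this – the Recipe/author naming here is mine, not necessarily how my codebase spells it:

from rest_framework import permissions

class IsAuthorOrReadOnly(permissions.BasePermission):
    # Anyone may read; only the object's author may change it.
    def has_object_permission(self, request, view, obj):
        if request.method in permissions.SAFE_METHODS:
            return True
        return obj.author == request.user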

https://pythoncircle.com/post/439/server-access-logging-in-django-using-middleware/ is an easy way to track user access – one migration adds a log table, and you get whatever you want out of each request flow. Be careful that you follow your GDPR/CCPA guidelines!
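The core of the approach is a tiny middleware class – this is a sketch in the spirit of the linked post rather than its exact code, and AccessLog is an illustrative model you would create with that one migration:

from .models import AccessLog

class AccessLoggingMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # Record whatever you care about from the request/response cycle.
        AccessLog.objects.create(
            path=request.path,
            method=request.method,
            status_code=response.status_code,
            user=request.user if request.user.is_authenticated else None,
        )
        return response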

https://github.com/pennersr/django-allauth is something I have been investigating, but it seems a bit much for my goals; I will come back and write more about this soon.

First glance at PowerBI

A simple problem I think plenty of people are having – can you take our data and whip up a simplified BI tool UI? Today I am reviewing PBI.

The PBI Service allows you to publish your reports with the PBI Desktop tool, which requires you to set up an embedded PBI service in your Azure account.

Getting data into PBI is straightforward and many data sources are accepted – I was tempted to jump in with the SQL DB option, but as this is all prototype stages I stuck with the tried and true Excel. I know that this means I skip over the data load issues and the like, but that’s not the purpose of this activity.

As we are looking at an analytical dataset I created a calendar tab, a dimension tab, and two fact tabs with dummy data.

The PBI import process is pretty smooth on the first attempt, but you’ll likely want to do some cleanup. Click on the modeling tab and verify it has inferred your relationships in the way you expect, or add a few of your own.

As the data was imported from Excel, I noticed that the Sigma symbol (a sign you can use the value in summary calculations) was missing from many of the columns – because they were numeric fields containing NULLs, the import process decided they were string fields.

For the purposes of this demo it was fine to switch the NULLs to 0 values, so I updated the data in the sheet, updated the type in the model, and refreshed the data – no problem. This may not be the best solution if zero means something different from NULL in your calculations.

I experimented with the various built-in visuals and noted that there are some pretty good third-party “verified” visuals as well. I would say overall most of the visualizations are very straightforward to use, with the one exception of the pie chart sub-groupings being the same colors.

Once I had published my starter visual, I immediately wanted to go back and make revisions, and the desktop client made that fairly easy – until I wanted to remove a column. Removing a referenced column is seemingly verboten in PBI, and it would not merge the changes.

It looks like whenever you want to make a breaking change you have a bit more GUI work to do (bringing me back to SSIS woes), and I didn’t see any nice way to “update all references” or the like, so this might be a tedious step if you are making many changes.

Other things that struck me:

The fact that there’s a desktop app (Windows only) for PBI publishing is a bit of a pain at the moment, but it was quick at manipulating the various visualizations – it’s just weird that it’s clearly some Electron app compromise (so why not just the web?)

Drilling into data, exporting data, analyzing data – these things are super nice compared to any previous reporting system I have worked with.

It was disappointing to learn that as a Pro PBI user you can’t share with other users who don’t have a Pro license (unless you pony up $4,995 a month to start). It’s not a big deal, but hopefully one day there’s a usage-based option.

Creating dashboards and apps is confusing – you can’t see a reference to anything unless you first go to a workspace, and only then can you create a dashboard from elements of a report. I don’t understand why this is so fragmented – just let me manage this in the desktop client, or at least make it available in the top-level UI.

Grouping groups of strings within strings in TSQL.

So a friend of mine had a query puzzle – he needed to print some W2 forms, but the forms themselves only allow up to 4 values to be placed in a box, or else a second form has to be issued.

The pickle is that the source data is freeform CSV text, so what can we do on the database side?

I reached for my handy CROSS APPLY, a numbers table, windowing functions, and a little modulo arithmetic – it’s not so bad to return a subgroup within a group in SQL, as long as you are willing to lay out additional columns to count your grouping.
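A sketch of the pattern – the table and column names are made up, and I’m using STRING_SPLIT (SQL Server 2016+) here where a numbers table would do the splitting on older versions:

-- Split the freeform CSV column, number each value, then bucket the values four at a time.
WITH split AS (
    SELECT w.employee_id,
           box_value = LTRIM(s.value),
           rn        = ROW_NUMBER() OVER (PARTITION BY w.employee_id ORDER BY (SELECT NULL))
    FROM dbo.W2Source AS w
    CROSS APPLY STRING_SPLIT(w.box14_csv, ',') AS s
)
SELECT employee_id,
       form_number = (rn - 1) / 4 + 1,  -- which copy of the form this value lands on
       slot        = (rn - 1) % 4 + 1,  -- position 1-4 within the box
       box_value
FROM split
ORDER BY employee_id, form_number, slot;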

Wrapping my head around object level permissions with Django

A quick aside if you read my SQL stuff – I have been working on a side project with Python and Django (a web framework), with a focus on bringing some sanity to recipes.

For the site I needed to implement a system where a user owns recipes, and other users can’t modify them.

In Django there’s a built-in auth model that supports both user registration and model-level permissions (yay!), but it means that naively all users could edit all recipes (boo!)

I did some quick searching because I was hopeful that I could reuse some of the well thought out components already available.

The big options I found were django-guardian, Rules, django-rest, or figuring it out myself.

Checking in with Rules, it looks like it’s not so much a Django-specific implementation as a way to implement rules + predicates to get things done. It seems interesting, but it’s not clear from the older examples provided how it’s supposed to work with a modern Django – I am going to skip it for now.

Checking in with django-guardian, I am seeing better, more relevant examples, but truthfully all I need is a single owner as a permission.

On the face of it, I feel like a lot of these systems are really complex for my purposes – it’s cool to be the ultimate system, but I just need something that provides:

  1. Users who have permission on objects.
  2. Having permission means you have all roles on the object (edit/delete).
  3. Each ownable object must get a creator/author on creation, who can then automatically edit/delete the thing.

Since I don’t need a complex permission system, I don’t care about meta-magic, and I just want to design a straightforward set of gates to manage, I decided to go off on my own and implement an author check.

I added an author field to my model, which gets set to the current user if it isn’t already set, and then I use the is_current_authenticated_user() function from the django-currentuser package to tell whether the author matches the current request.
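In rough terms the model ends up looking like this – a sketch assuming django-currentuser, with illustrative names (the exact helper I call may differ):

from django.db import models
from django_currentuser.db.models import CurrentUserField
from django_currentuser.middleware import get_current_authenticated_user

class Recipe(models.Model):
    title = models.CharField(max_length=200)
    author = CurrentUserField()  # defaults to the requesting user on first save

    def owned_by_current_user(self):
        # Only the original author should be able to edit or delete this recipe.
        return self.author == get_current_authenticated_user()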

I also implemented a simple map behind the scenes that connects forms to models to pages, so that the add/edit functions can just do their work and pass the correct set of forms back to the correct set of pages – I will have to keep looking, because it wasn’t clear how to infer the model’s form class automatically without some mightier Python than I currently possess.

I disable the form fields in the UI as a nicety if you don’t own the object (so you don’t accidentally try to edit it), but to be sure I also prevent saving on the backend if things don’t match up, and there’s the added bonus of free CSRF protection built into Django.
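A sketch of that guard, building on the Recipe model sketched above (again, illustrative rather than my exact forms):

from django import forms
from django.core.exceptions import PermissionDenied

from .models import Recipe

class RecipeForm(forms.ModelForm):
    class Meta:
        model = Recipe
        fields = ["title"]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Grey out the fields in the UI when the viewer is not the author.
        if self.instance.pk and not self.instance.owned_by_current_user():
            for field in self.fields.values():
                field.disabled = True

    def save(self, commit=True):
        # Backend check: refuse to persist edits from non-owners even if the
        # disabled fields get tampered with client-side.
        if self.instance.pk and not self.instance.owned_by_current_user():
            raise PermissionDenied("Only the author can modify this recipe.")
        return super().save(commit=commit)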

And so after all that I have a straightforward solution without even touching the Rules code – it took me more time to noodle over third-party options than to just sit down and write it.

Recipe Scraping with Python

I am working on a new project with the aim of scraping a wide variety of data sources, ranging from arrest data to recipe websites.

I am going to catalog the project’s progress through the various issues, and try to encapsulate as much as possible so that if there are components others want to use, they can.

Here’s the process I am taking:

  1. Decide on a language.
    • What languages are well supported for web scraping?
    • Plenty, but of them I am the most familiar with python.
  2. Decide on feature set.
    • We are going with a headless HTML scraper for now; JavaScript rendering will be supported in future versions.
    • We need to populate some basic datapoints for a recipe, but we’ll progressively enhance sites that don’t have the things we want.
  3. Build initial targeted list of sites to scrape
    • This was done by my partner in crime, more on that later.
    • >200 sites listed to target specific delicious recipes.
  4. Find some useful resources for scraping sites with python.
  5. Construct a simple loop for testing/validating data.

This Sunday’s execution:

  1. Get an isolated VM up.
  2. Install python3 and python3-venv on debian.
  3. Find that there are actually a few good tools which implement this in a more general way: https://github.com/hhursev/recipe-scrapers
  4. Test that out on a specific site (sorry, none of those links just yet!)
  5. Find that recipe-scrapers actually supports it… automagically, even though it’s not on their supported-sites list (a quick usage sketch follows this list).
  6. Well… that was easy.
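For reference, the usage that “just worked” is roughly this – the URL is a placeholder, since I’m not naming the test site yet:

from recipe_scrapers import scrape_me

scraper = scrape_me("https://example.com/some-recipe")  # placeholder URL

print(scraper.title())
print(scraper.total_time())    # minutes
print(scraper.ingredients())   # list of ingredient strings
print(scraper.instructions())  # the preparation steps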

I didn’t expect a great package to be available out of the box for the problem at hand, so kudos to these wonderful devs.

For my next post I will test combining spidering and downloading to create a cohesive “cookbook” from a target site.

Converting TIMESTAMP to VARCHAR in TSQL or Replacing fn_varbintohexstr

Just a quick one in case I forget – I had a scenario where I needed to replace fn_varbintohexstr in converting an existing product to something more Azure SQL DB friendly.

I did some googling and didn’t like the results, especially since most of them return NVARCHAR(MAX) instead of what they generally should – just VARCHAR(length of the data) – and in my case (timestamps) the value was always 18 characters.

The key was converting to binary first, and then using the ,1 style parameter for CONVERT.

select convert(varchar(18), convert(varbinary(18), row_version), 1)

That’s all folks.

Dynamic UNPIVOT Reporting on Cardinality.

Today I was diving into some extremely wide tables, and I wanted to take a quick look at things like “how many unique values does this table have in every column?”

This can be super useful if you have a spreadsheet of results or a schema without effective normalization and you want to determine which columns are the “most unique” – that is, which have high cardinality.

The GitHub gist is embedded at the bottom of the page, but I will run you through the code in case you want an explanation of how it works.

The procedure takes a schema and a table/view name and combines them – I could ask the user to do this for me, but there are parts where it’s useful to keep them separate, and I don’t want to mess with split logic :p

The procedure is defined as a temporary stored procedure, as I didn’t want to leave a trace when I disconnect.

I then define the dynamic UNPIVOT statement, into which we will feed our dynamic column listing.

We gather our columns (twice) – once for the COUNT(DISTINCT) expressions, and once for the column list we want to UNPIVOT.
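A condensed sketch of that shape – the full gist below is more careful (and handles NULLs via @magic_value, which this skips), and STRING_AGG needs SQL Server 2017+:

DECLARE @object nvarchar(300) = N'dbo.MyTable',  -- schema + table/view name, already combined
        @count_cols nvarchar(max), @unpivot_cols nvarchar(max), @sql nvarchar(max);

-- Gather the columns twice: once as COUNT(DISTINCT ...) expressions, once as the UNPIVOT list.
SELECT @count_cols   = STRING_AGG(N'COUNT(DISTINCT ' + QUOTENAME(name) + N') AS ' + QUOTENAME(name), N', '),
       @unpivot_cols = STRING_AGG(QUOTENAME(name), N', ')
FROM sys.columns
WHERE object_id = OBJECT_ID(@object);

SET @sql = N'SELECT ColumnName, DistinctValues
FROM (SELECT ' + @count_cols + N' FROM ' + @object + N') AS c
UNPIVOT (DistinctValues FOR ColumnName IN (' + @unpivot_cols + N')) AS u
ORDER BY DistinctValues DESC;';

EXEC sys.sp_executesql @sql;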

Here’s an example of running it against a tsqlt testresult table:


As we can see, the tSQLt TestResult table has one distinct class, two distinct messages and results, and unique everything else (so that’s where I will focus).

I also utilize a @magic_value variable in the code (not shown) to deal with the fact that a NULL value in an UNPIVOT statement won’t count as an occurrence. I want to disambiguate between NULL and any particular value that might actually occur, so using something like -1 or the string 'NULL' would be inappropriate.

That’s it for today!

Grab the full code below –

Postgres Plugin Support Added in Azure Data Studio!!

I know I have been writing a lot about ADS recently, but this is even bigger than the Notebook announcement.

A Postgres plugin has been announced in the insider release of ADS, and it just works!

If the term Postgres is unfamiliar – PostgreSQL is one of the preeminent open source database solutions, and it is seeing wide adoption due to its quality and, of course, its price.

For those of you in the Postgres world, you have probably already bounced between a few SQL clients and, if you are like me, not been totally impressed with the results.

For those of you who are SQL Server data professionals and live in the comfortable world of SSMS and ADS, you may not know that a significant part of the world still runs queries on the command line, or is forced to rebuild their own tool to meet their needs in each language du jour.

I have been working on some side projects with Docker, Postgres, and a few other pieces of tech, so this announcement was perfect for me to test it out.

Configuration

Just like in my post on Notebooks, I am going to download the most recent release from the Insiders build, but I am going to keep the screenshots a little more brief.

Make sure you enable those preview features!

Jump to the extensions section and find the Postgres plugin. (Sidebar button)

I used a local connection to a Docker container hosting my Postgres instance, and Azure Data Studio connected on the first try!

Intellisense works great!

Notebooks instantly inferred my context from the Postgres connection! Sweet!

Even the explain button pulls the query plan 😀

I love this direction for Azure Data Studio and I hope we see more open source databases supported in the future!