Getting Data In

Help to remove brackets and commas from data, sort into a CSV, and dedup

bpolsen
Explorer

I have data which looks like the following:

[000003074859, 000003075752, 000003224575, 000003228286, 000003235217, 000003246379, 000003246434, 000003246725, 000003246934, 000003248574]
[0010242946, 0002363081, 000006459131, 0010275565, 000000430019, 000000465470, 000000465546, 000003228900, 000003616661, 000003648249]

I would like to:

1) Remove the brackets and commas
2) Change the CSV into one row per entry
3) Dedup

So, the final data would look like:
000003074859
000003075752
000003224575
....................

Thank you very much in advance
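For reference outside Splunk, the three steps above (strip brackets and commas, one value per row, dedup) can be sketched in plain Python. This is only an illustration of the desired transformation, using a small made-up sample that includes one duplicate value (the original rows shown above contain no duplicates):

```python
import re

# Illustrative sample: two bracketed rows with one duplicated value.
raw = """[000003074859, 000003075752, 000003224575]
[0010242946, 000003074859, 0002363081]"""

# Step 1+2: extracting every run of digits drops the brackets,
# commas, and whitespace in one pass, yielding one value per entry.
values = re.findall(r"\d+", raw)

# Step 3: dedup while preserving first-seen order.
unique = list(dict.fromkeys(values))

print("\n".join(unique))
```

The same extract-everything-then-dedup idea is what the SPL answers below implement with rex/mvexpand/dedup.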

1 Solution

niketn
Legend

Following is a run-anywhere search based on your example. The pipes up to | table data generate mock data; in your environment you would need only the pipes after that, with your own field name in place of data.

| makeresults
| eval data="[000003074859, 000003075752, 000003224575, 000003228286, 000003235217, 000003246379, 000003246434, 000003246725, 000003246934, 000003248574]"
| append 
    [| makeresults
    | eval data="[0010242946, 0002363081, 000006459131, 0010275565, 000000430019, 000000465470, 000000465546, 000003228900, 000003616661, 000003648249]"]
| table data
| eval data=replace(replace(data,"\[",""),"\]","")
| eval data=split(data,",")
| mvexpand data
| sort num(data)
____________________________________________
| makeresults | eval message= "Happy Splunking!!!"


cpetterborg
SplunkTrust
SplunkTrust

And a slightly simpler version than niketnilay's:

|  makeresults 
|  eval _raw="[000003074859, 000003075752, 000003224575, 000003228286, 000003235217, 000003246379, 000003246434, 000003246725, 000003246934, 000003248574]
[0010242946, 0002363081, 000006459131, 0010275565, 000000430019, 000000465470, 000000465546, 000003228900, 000003616661, 000003648249]" 
|  rex max_match=0 "(?P<myfield>\d+)" 
|  mvexpand myfield 
|  table myfield
|  dedup myfield

niketn
Legend

@cpetterborg... upvoting... this is obviously a simpler working example 🙂

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

bpolsen
Explorer

Gentlemen,
Thank you so very much! I was able to cobble together a search that meets my needs, thanks to your guidance. Much appreciated!


niketn
Legend

@bpolsen, if one of the answers here has helped, do not forget to accept it to mark the question as answered. Please also upvote the answers that helped you. If you used something other than the examples here, then kindly add that as an answer and accept it.

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

cpetterborg
SplunkTrust
SplunkTrust

You could do this at search time, and have a set of events with one of the items per event. Is that sufficient? Or do you require that the data come in to be indexed with one item per event?


bpolsen
Explorer

The data I show is the result of a search and represents two events. I'd like to show the data as a dedup'ed series of rows, one entry per row. Thanks!


niketn
Legend

If the data shown here is the result of a search, what is the field name?

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"