All Posts

Do this: In the panel where you suspect information loss, click the magnifying glass ("Open in search"). Run the search again in the new window. Post the two outputs if they are different. (Anonymize as needed.) As @ITWhisperer says, a chained search simply uses the results from the main search as if they were the interim output of the same search shown in the new window. The only difference is that the main search runs with its own job ID, so multiple chained searches can reuse the same results. No information should be lost (unless some memory/disk limit prevents saving the complete results).
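To illustrate, here is a minimal sketch; the index, sourcetype, and field names are hypothetical, not from your dashboard. Suppose the base search is

index=web sourcetype=access_combined | stats count by host, status

and the chained (post-process) search in the panel is

| where status >= 500

Clicking "Open in search" should then run the equivalent standalone search

index=web sourcetype=access_combined | stats count by host, status | where status >= 500

If the panel and the standalone search disagree, that points to truncated base-search results rather than to the chaining itself.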
Do you mean to only count the difference and not show host values? In that case, add a count.

`macro 1`
| eval source = "macro1"
| append [search `macro 2` | eval source = "macro2"]
| stats values(source) as source by host
| where mvcount(source) < 2 AND source == "macro1"
| stats dc(host) as count_diff

If you don't get the desired results, you need to examine the data and post relevant data from each macro (anonymize as needed) as well as the actual results. Here is an emulation based on your original mock data.

macro 1:
| makeresults format=csv data="host
a
b
c
d"

macro 2:
| makeresults format=csv data="host
a
b
e
f"

Adding them together (because the search command is not used, the subsearch looks different):

| makeresults format=csv data="host
a
b
c
d"
| eval source="macro 1"
| append [makeresults format=csv data="host
a
b
e
f"
| eval source="macro 2"]
| stats values(source) as source by host
| where mvcount(source) < 2 AND source == "macro 1"

This gives

host  source
c     macro 1
d     macro 1

If I add | stats dc(host), it gives me 2.
Did anyone find a solution for this? The mentioned solutions don't seem to work.
Thank you so much.
My two cents - as I always say - leave time-related values as numbers until you finally need to render them to strings for presentation. That way it's easier to manipulate them (sort, offset and so on). So I'd do just _indextime - strptime(start_time, "%Y/%m/%d %H:%M:%S")
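For example, a minimal sketch (the duration field name and the final display rendering are illustrative, not from the original thread):

| eval duration = _indextime - strptime(start_time, "%Y/%m/%d %H:%M:%S")
| sort - duration
| eval duration_display = tostring(duration, "duration")

The numeric duration can be sorted and offset directly; tostring(..., "duration") only comes in at the very end for presentation.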
I am getting this error:

Error in 'rex' command: Failed to initialize sed. Failed to parse the replacement string

When I removed the double quotes, I got this output:

. type . failed on num
I am working with Linux auditd events based on the auditd message and field dictionaries, which we call type and field. (You can access the GitHub site for the .csv files that define the messages and fields.) For example, the macro name AUDIT_ADD_GROUP would be type=add_group and the macro name AUDIT_EXECVE would be type=execve. Now we have fields by type. SGID is the set group ID, so we could have fields called execve.sgid or add_group.sgid depending on the type value of the event. These are just 2 of the more than 40 types we are tracking, and each type has its own set of applicable fields. For example, there would also be add_group.tty and add_group.proctitle. Is there a way to automatically lop off the prefix of a dot-notation field on ingest? We need to standardize these fields to make them CIM compliant for our data model. The only alternative I see for now would be to use COALESCE to solve this problem (e.g. eval sgid = coalesce('add_group.sgid', 'execve.sgid')), but doing it that way would mean COALESCE expressions with numerous parameters.
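As a sketch of the coalesce approach being described, using only the two type prefixes named above and assuming the tty and proctitle fields exist for both types (the real version would need all 40+ prefixes per CIM field):

| eval sgid = coalesce('add_group.sgid', 'execve.sgid')
| eval tty = coalesce('add_group.tty', 'execve.tty')
| eval proctitle = coalesce('add_group.proctitle', 'execve.proctitle')

This works at search time, but each standardized field needs its own coalesce listing every type prefix, which is exactly the repetition the question is trying to avoid.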
Hi @Beshoy.Shaher, Thanks for following up and sharing the solution! We love to see it. 
Hi All,

We have a server that's reaching EOL and is currently a deployment server for 4k clients, and we need to migrate to a new machine. Can anyone help with the steps to test connectivity with the new DS and then ultimately migrate to the new DS server?
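As a rough sketch of the connectivity test only (the hostname and port below are placeholders, not from the post, and this is not a full migration plan): on a single test forwarder, deploymentclient.conf would point at the new DS, for example

[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089

followed by a forwarder restart. Running $SPLUNK_HOME/bin/splunk btool deploymentclient list --debug on that client shows which file the target is actually being read from, and the new DS's forwarder management page should then show the test client phoning home.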
Hi,

I am not sure if this is possible at all, but I figured it best to ask the experts before I keep spinning in circles.

I have created a classic dashboard, and would like to add the ability to toggle the visibility of the column chart data by having the user click on any desired legend label of a data series, so that the columns belonging to that series get toggled off or on. In the example below, the column chart displays 2 labels in the legend, "Used" and "Discount", and I would like the user to be able to toggle that view. I do not have access to the backend server and would like to do everything from the GUI. I would like the user to be able to click on the "Used" legend entry, and the column chart would remove the "Used" columns and only display the "Discount" columns, preferably expanded to the width of the chart.

I have seen this occur in one of the other column charts within the same dashboard, and I have not added or modified anything to create that. The Drilldown option is set to None for this panel, and all other panels, yet by some magic the other panels sometimes toggle the displayed data off/on when clicking on the legend labels. The XML for this panel is below, and any help would be greatly appreciated:

<panel>
  <chart id="chart1">
    <title>Title of the Dashboard</title>
    <search base="base_search">
      <query>| search merchant IN ($merchant$) | chart sum(used) as Used sum(Discount) as Discount over _time by merchant | addcoltotals row=f col=t label="Totals" labelfield=merchant fieldname="Totals" Used Discount</query>
    </search>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.axisTitleY.visibility">visible</option>
    <option name="charting.axisTitleY2.visibility">visible</option>
    <option name="charting.axisY2.enabled">0</option>
    <option name="charting.chart">column</option>
    <option name="charting.chart.showDataLabels">none</option>
    <option name="charting.drilldown">none</option>
    <option name="charting.legend.placement">right</option>
    <option name="refresh.display">progressbar</option>
    <option name="trellis.enabled">0</option>
  </chart>
</panel>
It seems like you may be able to accomplish what you want with an eval:

index=cs
| rex "Type=(?<type>[a-z]+)"
| eval AResponse.BResponse.Message = replace('AResponse.BResponse.Message', "Ref number \w+ failed on num: ", type." failed on num: ")
Try using the concatenation operator to include the field from the first regex in the second.

index=cs
| rex "Type=(?<type>[a-z]+)"
| rex field=AResponse.BResponse.Message mode=sed "s/Ref number+\w+\sfailed on num:*+/" . type . " failed on num: /g"
Was this issue ever resolved? Because I am running into the same issue currently
index=cs
| rex "Type=(?<type>[a-z]+)"
| rex field=AResponse.BResponse.Message mode=sed "s/Ref number+\w+\sfailed on num:*+/NetworkA failed on num: /g"

Here I hardcoded NetworkA in the second rex, but it is actually a dynamic value and should change according to the value present in the field type. How can I use the type value in the second rex?
@PickleRick Thank you so much for your quick response. However, no changes. I was also trying to use props and transforms conf files, but that is not working either.

My props.conf:

[myprops]
REPORT-mytrans_fields = mytrans_fields

My transforms.conf:

[mytrans_fields]
REGEX = \<(\w+[^\n\/\>]+)\/?\>([^\<\n][^\<]*)
FORMAT = $1::$2
DEST_KEY = _raw

Any recommendations?
Thank you for the suggestion, it worked.
I've tried both of those. I forgot to put EventCode= in a couple of examples.
From the UF installed:

[splunktcp]
_rcvbuf = 1572864
acceptFrom = *
connection_host = ip
evt_dc_name =
evt_dns_name =
evt_resolve_ad_obj = 0
host = prdpl2bcl1101
index = default
logRetireOldS2S = true
logRetireOldS2SMaxCache = 10000
logRetireOldS2SRepeatFrequency = 1d
route = has_key:tautology:parsingQueue;absent_key:tautology:parsingQueue

From the Splunk Cloud inputs machine:

[root@servername bin]# ./splunk btool inputs list splunktcp
[splunktcp]
_rcvbuf = 1572864
acceptFrom = *
connection_host = ip
host = servername.aligntech.com
index = default
logRetireOldS2S = true
logRetireOldS2SMaxCache = 10000
logRetireOldS2SRepeatFrequency = 1d
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:rulesetQueue;absent_key:_linebreaker:parsingQueue
[splunktcp://9997]
_rcvbuf = 1572864
connection_host = ip
host = servername.aligntech.com
index = default
IIRC it is when you use "Last hour" (for example), as latest becomes the string "now", which confuses relative_time, although it might also happen when you use Advanced, as you can get an epoch time there.
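A quick way to see the difference (a sketch with made-up field names):

| makeresults
| eval latest_epoch = now(), latest_label = "now"
| eval from_epoch = relative_time(latest_epoch, "-1h")
| eval from_label = relative_time(latest_label, "-1h")

from_epoch comes back as an epoch number, while from_label should come back null, because relative_time expects a numeric epoch as its first argument rather than the literal string "now".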
Can't we update this query in some way to get both results in one pie chart? When using trellis it gives two pie charts, which is not helpful.