All Posts


We need to fetch user activity such as: 1. when a user has accessed the Splunk Cloud platform, and 2. activities performed by users in the Splunk Cloud platform. Please provide the API request to fetch the audit logs or events needed to determine user activity on the Splunk Cloud platform.
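For reference, a minimal sketch of what such a request could look like, assuming REST API access has been enabled for your Splunk Cloud stack and that the activity you need is recorded in the internal _audit index (the stack name, token, and field list below are placeholders, not a confirmed Splunk Cloud configuration):

curl -k -H "Authorization: Bearer <your_token>" \
  https://<your-stack>.splunkcloud.com:8089/services/search/jobs/export \
  -d output_mode=json \
  -d search="search index=_audit earliest=-24h | table _time, user, action, info"

Note that on Splunk Cloud the management port is not open by default, so API access may need to be requested for your stack first.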
Hello Everyone, I wanted to know how to handle the cookie consent page in Synthetic scripting while creating a user journey. What command should I use to accept/reject cookies? Any suggestion would be a great help. Thank you, Mahendra Shetty
My token is valid; I tried with two different tokens that I created.
Hi @scelikok, I am Chaehee, working at the Splunk Korea office as a Sales Engineer (GSS intern). I'm experiencing a UTF-8 encoding error, just like the problem you experienced. I think I could modify the code as you advised, but my product is Splunk Cloud, so I can't modify the code. Have you ever solved encoding problems on Splunk Cloud?
Hi Team, I am looking for help getting an alert to trigger if the latest result of a timechart command is 0. Suppose I am running a search over the last 8 hours with span=2h. If the result looks like the one below, it should raise an alert: 12-18-23 00:00 ---> count is "0". It should also display "0" if there are no events at all in the last 8 hours; currently I get nothing if there were no events during that time. Thank you.
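For illustration, a minimal sketch of one way to approach this, assuming you are charting an event count (the index is a placeholder; the appendpipe adds a zero row so there is still something to alert on when the search returns no events at all):

index=<your_index>
| timechart span=2h count
``` if the base search returned nothing, append a single zero row stamped with the current time ```
| appendpipe
    [| stats count
     | where count=0
     | eval _time=now()]
``` keep only the most recent bucket and check whether it is zero ```
| sort - _time
| head 1
| where count=0

With this shape, the alert trigger condition can simply be "number of results is greater than 0".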
Will this issue be addressed by Splunk in the latest releases? It's not good to have to give a user the admin_all_objects capability just to read KV store data. Thanks, Anil
Thank You, this helped. 
I was able to achieve this on my local instance with a stats aggregation by the "severity" field and then a transpose of the results, so that the Splunk chart visualization displays it this way. Example SPL:

<base_search>
| stats count as count by severity
| transpose header_field=severity column_name=severity
| fields + severity, critical, high, medium, low, informational

In the dashboard XML you should be able to add this option tag to your bar chart visualization to assign a color to each unique severity value:

<option name="charting.fieldColors">{"critical":0xFF0000,"high":0xFF7F50,"medium":0xFFBF00,"low":0xDFFF00,"informational":0x40E0D0}</option>

Full SPL used to replicate on my local instance:

| makeresults count=377
| eval severity="high"
| append
    [| makeresults count=1118
     | eval severity="medium"]
| append
    [| makeresults count=119
     | eval severity="critical"]
| append
    [| makeresults count=1001
     | eval severity="low"]
| append
    [| makeresults count=41
     | eval severity="informational"]
| stats count as count by severity
| transpose header_field=severity column_name=severity
| fields + severity, critical, high, medium, low, informational
Looks like as of Splunk 8.1 you should be able to pipe tokens into the sendemail command: https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Sendemail

If you have a scenario where you may need to email multiple groups across different applications from one search, then you may be able to use the "map" command piped directly into the "sendemail" command. (This may need some testing, but I'm pretty sure I have seen it done before.)

Note: the map command will attempt to dispatch a search for each row returned by the parent search. There is a default limit on the maximum number of searches it will attempt to dispatch (maxsearches=10). You can find more here: https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Map

Example:

| search index IN ("app_index_1", "app_index_2") CASE(ERROR)
| bucket span=1h _time
``` map the notification users by eval, lookup, or possibly derive them from the _raw data (this example just sets a hardcoded list of users for each application) ```
| eval notification_users_email=case(
    'index'=="app_index_1", mvappend("user_1@acme.com", "user_3@acme.com", "user_5@acme.com"),
    'index'=="app_index_2", "user_2@acme.com"
    )
| stats count as error_count values(notification_users_email) as notification_users_email by _time, app
``` trigger criteria: more than 50 errors for a specific application in a 1 hour time window ```
| where 'error_count'>50
``` prepare the notification user field to be formatted for the sendemail command (convert a multivalue field of unique values to a comma-delimited list of users) ```
| eval notification_users_email=mvjoin(notification_users_email, ",")
``` prepare the message to send to the email list for the application ```
| eval message='app'." had ".'error_count'." errors in a one hour time window. Please investigate..."
``` each row returned from the parent search is piped into the map command, which sends out its own email to the list of users associated with each application meeting the alert criteria ```
| map search="sendemail to=\"$notification_users_email$\" message=\"$message$\"" maxsearches=10
Hi Everyone, I have a column chart for the query below. The x-axis labels are sorted in alphabetical order, but my requirement is to display them in a static order (critical, high, medium, low, informational). Additionally, can we have a unique color for the bar of each x-axis label (e.g. critical: red, high: green)? Can someone guide me on how to implement these changes? Appreciate your help in advance!!

Query:
`notable`
| stats count by urgency
Perfect.  Thank you @dtburrows3 
I think this SPL will do what you are looking for.

index=o365 UserloginFailed*
| iplocation ClientIP
| search Country!=Australia
| stats values(Country) as Country by user
| where mvcount(Country)>1
I have a search that returns a list of users and the countries logins have occurred from, grouped by user.

index=o365 UserloginFailed*
| iplocation ClientIP
| search Country!=Australia
| stats values(Country) by user

So if a user logs in from one country, I get a single record for the user (user, Country). If a user logs in from multiple locations, I get the user name in one column and a list of the source locations in the values(Country) column. I would like to construct the search so that I only see those users who have logins from multiple countries. Thanks
Hello Everyone, I have created an alert that looks for security events for a few applications; if the condition matches, it must notify the users related to that specific application. Let's say we have applications A and B. Application A has a field users with values test, test2, test3, and Application B has a field users with values test4, test5, test6. If Application A has any security breach events, it must send an email to its users. Regards, Sai
This worked, thank you! I had looked at mvmap but this was not how I was trying to use it - thanks for your help.
@LearningGuy the eval statement is applied to all events, so based on the eval if(isnull())... test, it would not do anything if the Student column is non-null. In your case, if you only want to floor() the Score for the Total row, you would need to do

| eval Score=if(Student="Total", floor(Score), Score)

so you ONLY floor the Score if it's the Total row.
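If it helps to see it on dummy data, here is a small runnable sketch (the student names and scores are made up) showing that only the Total row gets floored:

| makeresults
| eval Student="Alice", Score=81.4
| append
    [| makeresults
     | eval Student="Bob", Score=92.7]
| append
    [| makeresults
     | eval Student="Total", Score=174.1]
| table Student, Score
``` conditional floor: only the Total row is affected ```
| eval Score=if(Student="Total", floor(Score), Score)

The first two rows keep their decimals, while the Total row comes back as 174.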
Nevermind, @ITWhisperer beat me to it! There are probably a couple of ways of doing this, but this seemed to work for me on my local instance:

<base_search>
| eval time=strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
| appendpipe
    [| bucket span=1m time
     | stats sum(SumTotalErrors) as sumErrors by time
     | eval bucket_type="1 minute"]
| appendpipe
    [| bucket span=2m time
     | stats sum(SumTotalErrors) as sumErrors by time
     | eval bucket_type="2 minutes"]
| appendpipe
    [| bucket span=3m time
     | stats sum(SumTotalErrors) as sumErrors by time
     | eval bucket_type="3 minutes"]
| appendpipe
    [| bucket span=5m time
     | stats sum(SumTotalErrors) as sumErrors by time
     | eval bucket_type="5 minutes"]
| appendpipe
    [| bucket span=1h time
     | stats sum(SumTotalErrors) as sumErrors by time
     | eval bucket_type="1 hour"]
| stats count as sample_size, avg(sumErrors) as avg_sumErrors by bucket_type
| eval "Average Error Rate (Human Readable)"=round(avg_sumErrors, 0)." Errors per ".'bucket_type'
| addinfo
| eval search_time_window_sampled=tostring('info_max_time'-'info_min_time', "duration")
| fields - info_*_time, info_sid
| sort 0 +sample_size

Not quite a loop, but I am curious about this so I will keep trying out different things.

As for the dashboard, you can set up an input token (dropdown) to allow the user to select a span, use that token in | bucket span=$span$ time, and then do your stats command.
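For example, a minimal Simple XML sketch of that dropdown (the token name, choices, index, and field names are placeholders you would adapt to your dashboard):

<input type="dropdown" token="span" searchWhenChanged="true">
  <label>Bucket span</label>
  <choice value="1m">1 minute</choice>
  <choice value="2m">2 minutes</choice>
  <choice value="5m">5 minutes</choice>
  <choice value="1h">1 hour</choice>
  <default>5m</default>
</input>

and in the panel search:

<query>
  index=your_index
  | eval time=strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%Q")
  | bucket span=$span$ time
  | stats sum(SumTotalErrors) as sumErrors by time
</query>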
You could try something like this

| table _time SumTotalErrors
| appendpipe
    [| stats avg(SumTotalErrors) as AverageBySpan by _time
     | eval Span="1m"]
| appendpipe
    [| bin _time span=2m
     | stats avg(SumTotalErrors) as AverageBySpan by _time
     | eval Span="2m"]
| appendpipe
    [| bin _time span=3m
     | stats avg(SumTotalErrors) as AverageBySpan by _time
     | eval Span="3m"]
| appendpipe
    [| bin _time span=5m
     | stats avg(SumTotalErrors) as AverageBySpan by _time
     | eval Span="5m"]
| appendpipe
    [| bin _time span=10m
     | stats avg(SumTotalErrors) as AverageBySpan by _time
     | eval Span="10m"]
| appendpipe
    [| bin _time span=1h
     | stats avg(SumTotalErrors) as AverageBySpan by _time
     | eval Span="1h"]
| where isnotnull(AverageBySpan)
I agree with @richgalloway's response here for this particular example. However, if you want a more generalized approach to parsing out a table ingested into Splunk with this format in _raw (capable of accounting for different headers, without having to build out a regex each time), you may want to try the SPL below.

<base_search>
``` Doing this before an mvexpand occurring later in the search pipeline can help attribute the expanded data back to its original event (if needed) ```
| streamstats count as event_count
``` Extract info from the table title (this is the one line that would need to be adjusted for each unique table, depending on the title format and the data you want extracted from it) ```
| rex field=_raw "Disk\s+Utilization\s+for\s+(?<Server>[^\s]+)\s+in\s+(?<Region>[^\s]+)\s+(?<Environment>[^\s]+)\s+\-(?<Server_IP>[^\s]+)"
``` Parse individual rows from the table as a multivalued field ```
| spath input=_raw path=table.tr output=data
| fields - _raw
``` mvexpand each row to its own event ```
| mvexpand data
``` replace tags with standardized breakers for parsing ```
| eval item_data=replace(replace(replace(replace('data', "\<\/t(?:h|d)\>\<t(?:h|d)[^\>]*\>", "<BREAK>"), "^\<t(?:h|d)[^\>]*\>", "<START>"), "\<\/t(?:h|d)\>", "<END>"), "(\<START\>|\<BREAK\>)(\<BREAK\>|\<END\>)", "\1#NULL#\2")
``` set a row count for each table ```
| streamstats count as row_number by event_count
``` assign a row type depending on the row number (assumes that the table has a header) ```
| eval row_number='row_number'-1,
    row_type=if(
        'row_number'==0, "header",
        "data"
        ),
``` remove start and end tags ```
    data_mv=split(replace(replace(item_data, "\<START\>", ""), "\<END\>", ""), "<BREAK>"),
``` trim off any leading or trailing spaces from each entry in the table ```
    data_mv=case(
        mvcount(data_mv)==1, trim(data_mv, " "),
        mvcount(data_mv)>1, mvmap(data_mv, trim(data_mv, " "))
        )
| fields - data, item_data
``` pull header fieldnames into each row of the table to stitch it all together ```
| eventstats list(eval(case('row_type'=="header", 'data_mv'))) as header by event_count
``` mvzip to attribute each table entry to the header field (we will loop through this field to build the table further down the search pipeline) ```
| eval data_mv_zip=mvzip(header, data_mv, "<DELIM>")
| fields - header, data_mv
``` we no longer need the row that only holds the header info, now that it has been pulled into each data entry row of the table ```
| where NOT 'row_type'=="header"
``` initialize the json object that will hold KV pairs of header_fieldname/row_entry ```
| eval table_json=json_object()
``` loop through the MV field and build the json object containing all KV assignments ```
| foreach mode=multivalue data_mv_zip
    [| eval key=mvindex(split('<<ITEM>>', "<DELIM>"), 0),
        value=mvindex(split('<<ITEM>>', "<DELIM>"), 1),
        table_json=json_set(table_json, 'key', 'value')]
| fields - key, value, row_type, data_mv_zip
``` spath the json object just built ```
| spath input=table_json
``` remove the json object now that we don't need it ```
| fields - table_json
``` list out the data in the order that you want it displayed ```
| table _time, event_count, Server, Region, Environment, Server_IP, "Filesystem", "Type", "Blocks", "Used %", "Available %", "Usage %", "Mounted on"
``` replace any placeholder #NULL# values with actual null() values now that the table has been generated ```
| foreach *
    [| eval <<FIELD>>=if(
        '<<FIELD>>'=="#NULL#", null(),
        '<<FIELD>>'
        )]
This method should also account for null values in the html table, whereas the regex shared in your original post assumes that table entries always have a leading and trailing space wrapping a non-null value. That is true for the table you shared, but for generalized parsing of an html table you may run into issues doing it that way. As a proof of concept, I put together simulation data with 3 separate events, each with its own html table and set of data with its own headers, and piped them through this method. Below is the SPL used.

| makeresults
| eval _raw="<p align='center'><font size='4' color=blue>Example: Table 1</font></p>
<table align=center border=2>
<tr style=background-color:#2711F0 ><th>sample_field_1</th><th>sample_field_2</th><th>sample_field_3</th><th>sample_field_4</th></tr>
<tr><td>table1_row1_column1</td><td></td><td>table1_row1_column3</td><td></td></tr>
<tr><td>table1_row2_column1</td><td>table1_row2_column2</td><td>table1_row2_column3</td><td>table1_row2_column4</td></tr>
<tr><td>table1_row3_column1</td><td></td><td>table1_row3_column3</td><td>table1_row3_column4</td></tr>
<tr><td>table1_row4_column1</td><td>table1_row4_column2</td><td>table1_row4_column3</td><td>table1_row4_column4</td></tr>
<tr><td></td><td></td><td></td><td>table1_row5_column4</td></tr>
</table></body></html>"
| append
    [| makeresults
     | eval _raw="<p align='center'><font size='4' color=blue>Example: Table 2</font></p>
<table align=center border=2>
<tr style=background-color:#2711F0 ><th>sample_field_5</th><th>sample_field_6</th><th>sample_field_7</th></tr>
<tr><td></td><td>table2_row1_column2</td><td>table2_row1_column3</td></tr>
<tr><td>table2_row2_column1</td><td>table2_row2_column2</td><td>table2_row2_column3</td></tr>
<tr><td>table2_row3_column1</td><td></td><td>table2_row3_column3</td></tr>
<tr><td></td><td>table2_row4_column2</td><td></td></tr>
<tr><td>table2_row5_column1</td><td>table2_row5_column2</td><td>table2_row5_column3</td></tr>
</table></body></html>"]
| append
    [| makeresults
     | eval _raw="<p align='center'><font size='4' color=blue>Example: Table 3</font></p>
<table align=center border=2>
<tr style=background-color:#2711F0 ><th>sample_field_1</th><th>sample_field_2</th><th>sample_field_3</th><th>sample_field_4</th><th>sample_field_5</th><th>sample_field_6</th></tr>
<tr><td>table3_row1_column1</td><td></td><td>table1_row1_column3</td><td></td><td>table1_row1_column5</td><td>table1_row1_column6</td></tr>
<tr><td>table3_row2_column1</td><td>table3_row2_column2</td><td>table1_row2_column3</td><td>table3_row2_column4</td><td>table1_row2_column5</td><td>table1_row2_column6</td></tr>
</table></body></html>"]
``` Extract table titles ```
| spath input=_raw path=p.font output=table_title
| streamstats count as event_count
| spath input=_raw path=table.tr output=data
| fields - _raw
| mvexpand data
| eval item_data=replace(replace(replace(replace('data', "\<\/t(?:h|d)\>\<t(?:h|d)[^\>]*\>", "<BREAK>"), "^\<t(?:h|d)[^\>]*\>", "<START>"), "\<\/t(?:h|d)\>", "<END>"), "(\<START\>|\<BREAK\>)(\<BREAK\>|\<END\>)", "\1#NULL#\2")
| streamstats count as row_number by event_count
| eval row_number='row_number'-1,
    row_type=if(
        'row_number'==0, "header",
        "data"
        ),
    data_mv=split(replace(replace(item_data, "\<START\>", ""), "\<END\>", ""), "<BREAK>"),
    data_mv=case(
        mvcount(data_mv)==1, trim(data_mv, " "),
        mvcount(data_mv)>1, mvmap(data_mv, trim(data_mv, " "))
        )
| fields - data, item_data
| eventstats list(eval(case('row_type'=="header", 'data_mv'))) as header by event_count
| eval data_mv_zip=mvzip(header, data_mv, "<DELIM>")
| fields - header, data_mv
| where NOT 'row_type'=="header"
| eval table_json=json_object()
| foreach mode=multivalue data_mv_zip
    [| eval key=mvindex(split('<<ITEM>>', "<DELIM>"), 0),
        value=mvindex(split('<<ITEM>>', "<DELIM>"), 1),
        table_json=json_set(table_json, 'key', 'value')]
| fields - key, value, row_type, data_mv_zip
| spath input=table_json
| fields - table_json
| fields + _time, table_title, event_count, sample_field_*
| foreach *
    [| eval <<FIELD>>=if(
        '<<FIELD>>'=="#NULL#", null(),
        '<<FIELD>>'
        )]
Thanks @gcusello. The query below matches the correct count of all the statusCode values:

| rex "(?ms)statusCode: (?<statusCode>\d+)"
| stats count by statusCode
| table statusCode, count