All Topics

I am working on obtaining all user logins for a specified domain, then displaying what percentage of those logins were from compliant devices. I start by creating a couple of fields for ease of reading. These fields do produce data as expected; however, the table comes out with null for the percent values. I have tried the variations below, unfortunately with similar results. When I try to create a total value by combining compliant and noncompliant and then dividing, the total field has no data either.

base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| eval compliant=if(DeviceCompliance="true",DeviceCompliance,null())
| stats count as total by userPrincipalName
| eval percent=((compliant/total)*100)
| table userPrincipalName total percent

base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| eval compliant=if(DeviceCompliance="true",DeviceCompliance,null())
| eval noncompliant=if(DeviceCompliance="false",DeviceCompliance,null())
| eval total=sum(compliant+noncompliant)
| stats count by userPrincipalName
| table userPrincipalName compliant total
| eval percent=((compliant/total)*100)
| table userPrincipalName total percent
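A sketch of one likely fix (assuming 'deviceDetail.isCompliant' holds the strings "true"/"false"): a field created with eval before stats does not survive the stats command unless it is aggregated, so compliant is null afterwards. Counting the compliant logins inside stats avoids that:

```spl
base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| stats count as total, count(eval(DeviceCompliance="true")) as compliant by userPrincipalName
| eval percent=round((compliant/total)*100, 2)
| table userPrincipalName total percent
```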
I have to create a base search for a dashboard and I am kinda stuck. Any help would be appreciated.

index=service msg.message="*uri=/v1/payment-options*" eHttpMethodType="GET"
| fields index, msg.springProfile, msg.transactionId, eHttpStatusCode, eHttpMethodType, eClientId, eURI
| dedup msg.transactionId
| rename msg.springProfile as springProfile
| eval profile = case(like(springProfile, "%dev%"), "DEV", like(springProfile, "%qa%"), "QA", like(springProfile, "%uat%"), "UAT")
| eval request = case(like(eURI, "%/v1/payment-options%"), "PaymentOptions", like(eURI, "%/v1/account%"), "AccountTransalation")
| stats count as "TotalRequests", count(eval(eHttpStatusCode=201 or eHttpStatusCode=204 or eHttpStatusCode=200)) as "TotalSuccessfulRequests", count(eval(eHttpStatusCode=400)) as "Total400Faliures", count(eval(eHttpStatusCode=422)) as "Total422Faliures", count(eval(eHttpStatusCode=404)) as "Total404Faliures", count(eval(eHttpStatusCode=500)) as "Total500Faliures" by profile, eClientId

Now I want to include the stats in the base search, otherwise my values/events would be truncated. My problem is that I also need to count

| stats count as "TotalRequests", count(eval(eHttpStatusCode=201 or eHttpStatusCode=204 or eHttpStatusCode=200)) as "TotalSuccessfulRequests" by request

for each profile (DEV, QA, UAT), to display in 3 different panels. How do I incorporate this into the above base search?
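One sketch of an approach (field names taken from the search above): add request to the base search's split-by fields, so the base search stays statistical (and is not truncated) while still carrying everything each panel needs:

```spl
<base search as above>
| stats count as TotalRequests,
        count(eval(eHttpStatusCode=201 OR eHttpStatusCode=204 OR eHttpStatusCode=200)) as TotalSuccessfulRequests,
        count(eval(eHttpStatusCode=400)) as Total400Faliures
  by profile, eClientId, request
```

Each panel then filters and re-aggregates in its post-process search, e.g. for the DEV panel:

```spl
| search profile="DEV"
| stats sum(TotalRequests) as TotalRequests, sum(TotalSuccessfulRequests) as TotalSuccessfulRequests by request
```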
I am getting the error "could not create search". How do I fix this error? XML:

<input type="multiselect" token="environment">
  <label>Environments</label>
  <choice value="cfp08">p08</choice>
  <choice value="cfp07">p07</choice>
  <choice value="*">ALL</choice>
  <default>*</default>
  <valuePrefix>environment =</valuePrefix>
  <delimiter> OR </delimiter>
  <search>
    <query/>
  </search>
  <fieldForLabel>environment</fieldForLabel>
  <fieldForValue>environment</fieldForValue>
</input>
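A likely cause (an assumption based on the snippet above): the input declares a dynamic-options <search> whose <query/> is empty, so Splunk tries to create a search from an empty string. Either supply a real populating query, or drop the dynamic parts (<search>, <fieldForLabel>, <fieldForValue>) and keep only the static choices:

```xml
<input type="multiselect" token="environment">
  <label>Environments</label>
  <choice value="cfp08">p08</choice>
  <choice value="cfp07">p07</choice>
  <choice value="*">ALL</choice>
  <default>*</default>
  <valuePrefix>environment =</valuePrefix>
  <delimiter> OR </delimiter>
</input>
```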
Hello Splunkers!

1. Objective: free up disk space by deleting 1 month of data from a specific Splunk index containing 1 year of data.
2. Key considerations:
- How can we verify that the deletion of 1 month of data from Splunk indexes was successful?
- How long does Splunk typically take to delete this amount of data from the indexes?
- Is there a way to monitor or observe the deletion of old buckets or data from the Splunk UI (via SPL queries)?

Thanks in advance!
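For the monitoring question, one sketch (substitute your own index name) uses dbinspect, which lists each bucket with its time range and on-disk size, so you can watch the oldest buckets disappear and total usage drop as data is removed:

```spl
| dbinspect index=my_index
| stats count as buckets, sum(sizeOnDiskMB) as total_mb, min(startEpoch) as oldest
| eval oldest=strftime(oldest, "%Y-%m-%d %H:%M:%S")
```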
Hi all, we have created a table viz containing 2 levels of dropdowns, which use the same index and sourcetype. While implementing the row-expansion JScript in the dashboard, we get results at both levels; however, the second-level expansion exits abruptly.

Also, we noticed that pagination only works in the first-level table (inner child table row expansion) for the initial row we select, and only once. If we select the second row/entry in the same parent table, the inner child table's pagination freezes. We need to reload the dashboard every time to fix this.
Hello, I am looking to configure a POST request using a webhook as an alert action, but I can't see any authentication option. How do I add authentication to the webhook?
Hey guys, I have an input that is monitoring a log from syslog. The file contains data of multiple severities, which is bad, but I was thinking I could use a transform, referenced in props, to set the sourcetype, which I could then use to format the data. So I did this in inputs.conf:

[udp://x.x.x.x:5514]
index=cisco_asa
sourcetype=cisco_firewall
disabled=false

and these are logs from the Cisco ASA:

Sep 20 15:36:41 10.10.108.122 %ASA-4-106023: Deny tcp src inside:x.x.x.x/xxxx dst outside:x.x.x.x/xxxx by access-group "Inside_access_in" [0x51fd3ce2, 0x0]
Sep 20 15:36:37 10.10.108.122 %ASA-5-746015: user-identity: [FQDN] go.microsoft.com resolved x.x.x.x
Sep 20 15:36:37 10.10.108.122 %ASA-6-302021: Teardown ICMP connection for faddr x.x.x.x/x gaddr x.x.x.x/x laddr x.x.x.x/x type 8 code 0

Then I created a transforms.conf:

[set_log_type_critical]
source_key = _raw
regex = .*%ASA-4
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:alert

[set_log_type_error]
source_key = _raw
regex = .*%ASA-5
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:critical

[set_log_type_warnig]
source_key = _raw
regex = .*%ASA-6
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:error

I also have a props.conf that looks like:

[cisco:firewall]
TRANSFORMS-setlogtype_alert=set_log_tyoe_critical
TRANSFORMS-setlogtype_critical=set_log_tyoe_error
TRANSFORMS-setlogtype_error=set_log_tyoe_warning

My question is this: after configuring all of that, the sourcetype separation is still not happening. Do the transforms and props look correct? I'm testing locally, so I can break things all day long. Thanks for the assistance.
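A few likely issues stand out (observations on the config above, not tested against your data): the props stanza [cisco:firewall] does not match the sourcetype cisco_firewall assigned in inputs.conf, so the transforms never run; the transform names referenced in props are misspelled (set_log_tyoe_* vs set_log_type_*); and the documented transforms.conf setting names are uppercase (SOURCE_KEY, REGEX, DEST_KEY, FORMAT). A corrected sketch:

```ini
# props.conf -- stanza must match the sourcetype set in inputs.conf
[cisco_firewall]
TRANSFORMS-setlogtype = set_log_type_critical, set_log_type_error, set_log_type_warning

# transforms.conf -- one stanza per severity, e.g.:
[set_log_type_critical]
SOURCE_KEY = _raw
REGEX = %ASA-4-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:alert
```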
Is there any LinkedIn profile to follow for all Splunk updates and related cyber threats?
Hi everyone, I’m working on a dashboard in Dashboard Studio and need some guidance on exporting it as a PDF. Specifically, I would like to know how to split a multi-panel dashboard into several pages when downloading it as a PDF. Is there a way to configure the layout or settings to achieve this? Any tips or best practices for organizing content in Dashboard Studio to ensure each section appears on a separate PDF page would be greatly appreciated! Thanks in advance for your help!
Using Windows 10, I installed Splunk onto the drive folder itself (not the drive where Windows was installed) and then I wasn't able to access the drive. Properties showed it had 0 storage, and the default name of the drive in "My PC" was "NTFS drive" or something. I could not find the uninstall button in the Apps settings, nor could I find any services related to Splunk in Windows Services or Task Manager. I couldn't use the Splunk application itself either, and couldn't find a Splunk folder on the C drive either. I tried to run chkdsk X: /f /r in CMD and got the error "Chkdsk cannot dismount the volume because it is a system drive or there is an active paging file on it". I couldn't format the drive because it said it was in use. I ended up booting into safe mode and formatting the drive there, which solved all my issues, but does anyone know what the issue was?
Here is a really simple dashboard:

<form version="1.1" theme="light">
  <label>Simple input</label>
  <fieldset submitButton="false">
    <input type="text" token="text_tok" searchWhenChanged="true">
      <label></label>
      <default></default>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <search>
          <query>| makeresults | eval INPUT = if(len("$text_tok$") &gt;0, "$text_tok$", "(none)")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>

Its function is really simple: when nothing is entered into the text input, it should display something like

INPUT    _time
(none)   2024-09-28 17:33:54

Indeed, when I click the magnifying glass ("Open in search"), that's what I get. If any string is entered, that string should be displayed. For example, if a single letter "a" is entered, it should display

INPUT    _time
a        2024-09-28 17:31:31

Again, "Open in search" gives this output. However, no matter what is entered or not entered, the dashboard panel always says "Search did not return any events."

The test was done in Splunk 9.3.0.
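One plausible explanation (an assumption about the <event> viewer's behavior): the panel uses an <event> element, which renders raw events, while | makeresults produces statistical results with no _raw field, so the events viewer reports no events even though results exist ("Open in search" shows them in the statistics view). Switching the panel to a <table> usually shows the expected row:

```xml
<row>
  <panel>
    <table>
      <search>
        <query>| makeresults | eval INPUT = if(len("$text_tok$") &gt;0, "$text_tok$", "(none)")</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </table>
  </panel>
</row>
```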
I have a KPI alert using an ad hoc search which outputs custom fields, and a custom alert action is configured on Notable Event Aggregation Policy (NEAP) action rules, which trigger the action on the KPI notable event. alert_actions.conf has all the params defined, but $results.fieldname$ is always blank in the script. The results_file only has ITSI/KPI-specific fields and does not have the custom fields. How can I get the custom fields through to the action?
Hi all, I am using Office 365. I have an Office 365 unified group, and users are getting removed from this group automatically every day. I want to find out who removed or added the users to this group. When I use the below query, I am not getting any output; please guide me. Let's say my group name is MyGroup1 and its email address is MyGroup1@contoso.com.

sourcetype=o365:management:activity (Operation="*group*") unifiedgroup="*MyGroup1*"
| rename ModifiedProperties{}.NewValue AS ModAdd
| rename ModifiedProperties{}.OldValue AS ModRem
| rename UserId AS "Actioned By"
| rename Operation AS "Action"
| rename ObjectId AS "Member"
| rename TargetUserOrGroupName as modifiedUser
| table _time, ModAdd, ModRem, "Action", Member, "Actioned By" "modifiedUser"
| stats dc values("modifiedUser") by Action "Actioned By"
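A sketch that may get closer (assuming standard Azure AD audit operation names in the o365:management:activity data; verify the exact Operation values in your events first, since unifiedgroup may not exist as an extracted field):

```spl
sourcetype=o365:management:activity Operation IN ("Add member to group.", "Remove member from group.") "MyGroup1"
| rename UserId AS ActionedBy, Operation AS Action, ObjectId AS Member
| stats count, values(Member) AS Members by Action, ActionedBy
```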
I was working with data models and I came across something strange about them when they are accelerated vs. when they are not.

I created 2 data models, TestAccelerated and TestNotAccelerated. They are copies of each other with a few differences: the name/id, and one is accelerated while the other is not.

When I run a query to get the count of "MyValue" inside of field "MyID", I get different results. The accelerated data model returns fewer records, with a different grouping of _time, than the non-accelerated data model.

I'm curious if anyone knows what the search difference really is between accelerated and non-accelerated data models. The count ends up being the same, so there is no issue finding the count of "MyValue". I do see an issue if we pipe the output into a command that uses the rows for information rather than the count in each row, such as `| geostats`.

[Screenshots of the queries against the non-accelerated and accelerated data models were attached here.]
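A sketch for probing the difference (model and field names taken from the post; the span is an assumption): tstats against an accelerated model reads pre-summarized data, bucketed at the summary's granularity, while a non-accelerated model falls back to searching raw events, so row counts and _time grouping can differ even when totals match. The summariesonly flag makes the behavior explicit:

```spl
| tstats summariesonly=true count from datamodel=TestAccelerated where TestAccelerated.MyID="MyValue" by _time span=1h

| tstats summariesonly=false count from datamodel=TestNotAccelerated where TestNotAccelerated.MyID="MyValue" by _time span=1h
```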
Hi, I'm having a hard time trying to narrow down my search results. I would like to return only the results whose message contains the following string: "progress":"COMPLETED","subtopics":"COMPLETED". The text must be all together, in the sequence above. I tried adding a string like the one below to my search, but it didn't work:

message="*\"progress\":\"COMPLETED\",\"subtopics\":\"COMPLETED\"*"

Does anyone have suggestions on how to do that? I appreciate any help you can provide.
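One approach that often works (a sketch; index=my_index is a placeholder, and it assumes the fragment appears literally in the raw event): let the indexed terms narrow the result set, then enforce the exact sequence with a where clause against _raw, since index-time tokenization splits on punctuation and defeats wildcarded field matches:

```spl
index=my_index "progress" "subtopics" "COMPLETED"
| where like(_raw, "%\"progress\":\"COMPLETED\",\"subtopics\":\"COMPLETED\"%")
```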
My linux_audit logs increased after updating apps, causing the license manager to go over its limit. Does anyone know a fix for this? I have looked for the stanzas on the back end but am not able to find where these logs are coming from.
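A sketch for tracking down where the volume is coming from, using the license usage log on the license manager (in that log, st = sourcetype, s = source, h = host, b = bytes):

```spl
index=_internal source=*license_usage.log* type=Usage st=linux_audit
| stats sum(b) AS bytes BY h, s
| eval GB = round(bytes/1024/1024/1024, 3)
| sort - GB
```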
I would like to compare specific response status stats vertically rather than horizontally, so that the values line up, without relying on the appendcols command.

My search:

| multisearch
    [search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m latest=@m | eval date="Today"]
    [search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m-1w latest=@m-1w | eval date="LastWeek"]
| timechart span=1d count by status

[Screenshot of the current results was attached here.]

Desired results:

Status  Today  LastWeek
412     1      0
413     1      0
415     0      1
418     0      2
422     6      7
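Replacing the final timechart with chart over status by date should line the values up vertically as in the desired table (a sketch; "..." below stands for the multisearch exactly as written above, and the column order is alphabetical unless reordered with table):

```spl
...
| chart count over status by date
| fillnull value=0 Today LastWeek
| table status Today LastWeek
```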
The default value of the product selection should be "latest". The token for the default value is determined by a hidden search for the latest product, which depends on the selected device. If the device selection changes, the product selection should revert to the default value, i.e. the latest product ID for the newly selected device. Currently, setting the latest product ID upon device change is not working. How can I resolve this issue?

<search id="base_search">
  <query>
    | mpreview index="my_index"
    | search key IN $token_device$
  </query>
  <earliest>$token_time.earliest$</earliest>
  <latest>$token_time.latest$</latest>
  <refresh>300</refresh>
</search>
<input id="select_device" type="dropdown" token="token_device" searchWhenChanged="true">
  <label>Device</label>
  <selectFirstChoice>true</selectFirstChoice>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <search>
    <query>
      | mpreview index="my_index"
      | stats count by key
      | fields key
      | lookup device-mapping.csv ...
      | fields key full_name
    </query>
  </search>
  <fieldForLabel>full_name</fieldForLabel>
  <fieldForValue>key</fieldForValue>
  <delimiter>,</delimiter>
  <change>
    <unset token="token_product"></unset>
    <unset token="form.token_product"></unset>
  </change>
</input>
<search>
  <query>
    | mpreview index="my_index"
    | search key IN $token_device$
    | stats latest(_time) as latest_time by product_id
    | sort -latest_time
    | head 1
    | fields product_id
  </query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <done>
    <condition match="$job.resultCount$ != 0">
      <set token="latest_product_id">$result.product_id$</set>
    </condition>
    <condition match="$job.resultCount$ == 0">
      <set token="latest_product_id">*</set>
    </condition>
  </done>
</search>
<input id="select_product" type="multiselect" token="token_product" searchWhenChanged="true">
  <label>Product</label>
  <default>$latest_product_id$</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <choice value="*">All</choice>
  <search base="base_search">
    <query>
      | stats latest(_time) as latest_time by product_id
      | eventstats max(latest_time) as max_time
      | eval label=if(latest_time == max_time, "latest", product_id)
      | sort - latest_time
      | fields label, product_id
    </query>
  </search>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>product_id</fieldForValue>
  <delimiter>,</delimiter>
  <change>
    <condition label="All">
      <set token="token_product">("*") AND product_id != "LoremIpsum"</set>
    </condition>
  </change>
</input>
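A common pattern that may fix this (an assumption about Simple XML behavior): <default> is only applied when the input first initializes, so after a device change the multiselect keeps its old state. Setting form.token_product from the hidden search's <done> handler pushes the new default into the input each time that search re-runs:

```xml
<done>
  <condition match="$job.resultCount$ != 0">
    <set token="latest_product_id">$result.product_id$</set>
    <set token="form.token_product">$result.product_id$</set>
  </condition>
  <condition>
    <set token="latest_product_id">*</set>
    <set token="form.token_product">*</set>
  </condition>
</done>
```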
Hi, is it possible to convert the Enterprise command line

bin/splunk btool limits list --app=MX.3_MONITORING_v3 --debug

to a REST command that can be run from SPL in the cloud, please?

Thanks in advance.
Hello, I am confused about the "Expires" setting when configuring an alert. I have my alert scheduled every day, and Expires = 24 hours. Does that mean that after 24 hours the alert will no longer run? Thank you.