All Topics



Hello, I'm trying to use the following script (at the bottom) in the 'Run Script' shape of the Windows Remote Management app in Splunk SOAR. The shape errors out because of the curly braces used in the while loop and if statement, along with the {0} parameter, which will represent a Windows service. What are my options for getting around the curly braces? I've thought about using the custom code section, but I'm unsure how to set the script_str parameter without touching non-custom code. Attached is a screenshot detailing what I have with the custom code idea. The script verifies the status of a Windows service, checking every 30 seconds for 15 minutes:

$timeout = New-TimeSpan -Minutes 15
$sw = [Diagnostics.Stopwatch]::StartNew()
$success = 0
while ($sw.Elapsed -lt $timeout -AND $success -eq 0) {
    $status = Get-Service "{0}" | Select-Object -ExpandProperty Status
    if ($status -contains "Stopped") {
        $success = 1
    }
    Start-Sleep -Seconds 30
}
Write-Output $success
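The brace clash described above is consistent with the shape passing script_str through Python's str.format(). A minimal sketch of the custom-code idea under that assumption (whether the shape actually uses str.format() is a guess, and "Spooler" is a made-up service name): double every literal PowerShell brace as {{ and }}, keeping only {0} as the placeholder.

```python
# Hedged sketch: build script_str so that str.format() leaves PowerShell's
# braces intact. Literal { and } are doubled; {0} is the service placeholder.
template = (
    '$timeout = New-TimeSpan -Minutes 15\n'
    '$sw = [Diagnostics.Stopwatch]::StartNew()\n'
    '$success = 0\n'
    'while ($sw.Elapsed -lt $timeout -AND $success -eq 0) {{\n'
    '    $status = Get-Service "{0}" | Select-Object -ExpandProperty Status\n'
    '    if ($status -contains "Stopped") {{ $success = 1 }}\n'
    '    Start-Sleep -Seconds 30\n'
    '}}\n'
    'Write-Output $success\n'
)
script_str = template.format("Spooler")  # "Spooler" is a hypothetical service
```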
So I've recently started a new job, where I'm learning Splunk and learning how to support Splunk searches and dashboards left behind by someone else. I'm currently going through a lot of the previous worker's searches, trying to understand how they all work. Right now I'm looking at a search that is part of a larger dashboard, and whenever I run this bit as an individual search, it gives me the error "Error in 'EvalCommand': The expression is malformed. Expected )." The search itself is:

index=vuln_vulnscan sourcetype=tenable:sc:vuln severity.id>=2 OR pluginID="19506" earliest=-12d latest=now()
    [ search index=inventory_snow ((sourcetype=snow:cmdb_ci_network_adapter AND ("ip_address\"\: \"56." OR "ip_address\"\: \"170.214")) OR (sourcetype=snow:cmdb_ci_computer) OR (sourcetype="snow:cmdb_ci_server")) dv_u_eir="*$eir$*" dv_u_environment="$eir_env$" earliest=-2d latest=now()
    | fields dv_name
    | stats latest(*) as * by dv_name
    | lookup dnslookup clienthost as dv_name OUTPUT clientip as ip
    | table ip]
| fields pluginID dnsName ip port severity.name pluginName synopsis solution firstSeen lastSeen severity.id patchPubDate pluginText
| stats latest(*) as * by ip, pluginID, port
| eval patchAvailable="No Patch Available/Requires Manual Fix"
| eval patchAvailable=if(((patchPubDate>relative_time(now(),"-30d"))),"0d-30d",patchAvailable)
| eval patchAvailable=if(((patchPubDate<relative_time(now(),"-30d")) AND (patchPubDate>relative_time(now(),"-60d"))),"30d-60d",patchAvailable)
| eval patchAvailable=if(((patchPubDate<relative_time(now(),"-60d")) AND (patchPubDate>relative_time(now(),"-90d"))),"60d-90d",patchAvailable)
| eval patchAvailable=if((patchPubDate<relative_time(now(),"-90d") AND (patchPubDate>relative_time(now(),"-180d"))), "90d-180d",patchAvailable)
| eval patchAvailable=if((patchPubDate<relative_time(now(),"-180d") AND (patchPubDate>relative_time(now(),"-365d"))), "180d-365d", patchAvailable)
| eval patchAvailable=if((patchPubDate<relative_time(now(),"-365d") AND (patchPubDate>0)), "365d+", patchAvailable)

I understand most of this search, but I don't understand why Splunk would give this error. I've gone over it with a fine-toothed comb and couldn't find any missing ")" symbols anywhere. There's no eval in the subsearch, and all the eval commands I see have the proper syntax. Is it something to do with the fact that I copied this out of a larger dashboard?
I know how to create a drilldown in my (classic) dashboard to link a search with the selected input that I click on in a visualization, but is it possible to have the results displayed in a different manner? In particular, I'm looking to have the search run and the results displayed in a pop-up box rather than a separate tab. I couldn't find any documentation addressing this specifically, and I'm not great with XML. Either a point in the right direction or an answer would be greatly appreciated! NOTE: In the instance I'm working with, I cannot add .js files to reference, so if it requires JavaScript, I'd have to put it in my dashboard code.
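Not a true pop-up, but one JavaScript-free pattern that mimics one is a hidden panel revealed by a drilldown token. A hedged Simple XML sketch (the token name, searches, and field names are all made up; adapt to your dashboard):

```xml
<row>
  <panel>
    <table>
      <search>
        <query>index=_internal | stats count by sourcetype</query>
      </search>
      <drilldown>
        <!-- Instead of opening a new tab, stash the clicked value in a token -->
        <set token="show_detail">$click.value$</set>
      </drilldown>
    </table>
  </panel>
  <!-- This panel stays hidden until the token is set, appearing pop-up style -->
  <panel depends="$show_detail$">
    <title>Details for $show_detail$</title>
    <table>
      <search>
        <query>index=_internal sourcetype=$show_detail$ | head 10</query>
      </search>
    </table>
  </panel>
</row>
```

The panel appears in-page next to the original rather than floating over it; a real modal would need JavaScript, which you've ruled out.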
I have a logfile with information like this:

2023-04-05 13:54:17.259 INFO [http-nio-8080-exec-117][OTPViewController:206] The list of the form bean for Kubra
2023-04-05 13:54:17.260 INFO [http-nio-8080-exec-117][OTPViewController:207] Payment Request ID for debug the Kubra payment. DanBkDg981
2023-04-05 13:54:17.260 INFO [http-nio-8080-exec-117][OTPViewController:208] Amount Number . 00902418
2023-04-05 13:54:17.260 INFO [http-nio-8080-exec-117][OTPViewController:209] Policy Number. 05349531
2023-04-05 13:54:17.261 INFO [http-nio-8080-exec-117][OTPViewController:210] Address. 2912 9TH ST W
2023-04-05 13:54:17.261 INFO [http-nio-8080-exec-117][OTPViewController:211] Email. test@aol.com
2023-04-05 13:54:17.262 INFO [http-nio-8080-exec-117][OTPViewController:212] Pmt Amount . 999.00
2023-04-05 13:54:17.262 INFO [http-nio-8080-exec-117][OTPViewController:213] Pmt Date . 05012023

I need a report in table format with these columns: "RequestID" "Policy Number" "Email" "Address" "Amount Number" "Pmt Amount" "Pmt Date". We can search based on the keyword "OTPViewController", should look for a consecutive thread number ("http-nio-8080-exec-117"), and the extraction of each value should start after the keyword and the dot ".". Will appreciate your feedback and time.
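The extraction above can be sketched with a stdlib regex, grouping values by thread so each request becomes one table row. This is a hedged sketch, assuming the line format is always "<ts> INFO [<thread>][OTPViewController:<n>] <label>. <value>" as in the sample:

```python
import re

# Match: timestamp, level, [thread][OTPViewController:NNN], "Label . value"
LINE = re.compile(
    r'^\S+ \S+ INFO \[(?P<thread>[^\]]+)\]\[OTPViewController:\d+\]\s*'
    r'(?P<label>[^.]+?)\s*\.\s*(?P<value>.*)$'
)

def extract(lines):
    """Collect label -> value per thread; one dict per request/thread."""
    rows = {}
    for line in lines:
        m = LINE.match(line)
        if m:
            rows.setdefault(m.group('thread'), {})[m.group('label')] = m.group('value')
    return rows

sample = [
    '2023-04-05 13:54:17.260 INFO [http-nio-8080-exec-117][OTPViewController:209] Policy Number. 05349531',
    '2023-04-05 13:54:17.262 INFO [http-nio-8080-exec-117][OTPViewController:212] Pmt Amount . 999.00',
]
rows = extract(sample)
```

In SPL the same idea maps to a rex with the label/value groups followed by stats values(*) by thread, but the Python version shows the parsing logic itself.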
Hi Team, I am getting the below warning notification from the indexers; can someone help me clear it?

"Search peer XXXX has the following message: Events are not displayed in the search results because _raw fields exceed the limit of 16777216 characters. Ensure that _raw fields are below the given character limit or switch to the CSV serialization format by setting 'results_serial_format=csv' in limits.conf. Switching to the CSV serialization format will reduce search performance"
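Going only by the wording of the warning itself, the setting it suggests would be a hedged sketch like the following in limits.conf (the [search] stanza placement is my assumption; note the warning's own caveat that CSV serialization reduces search performance, so trimming oversized _raw events at ingest may be the better fix):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf on the search peers
[search]
results_serial_format = csv
```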
(edited to give a more accurate example) I have an input that is JSON but then includes escaped JSON, a much more complex version of the example below (many fields and nesting both outside of message and in the message string), so this isn't just a field extraction of a particular field. I need to tell Splunk to extract the message string (removing the escaping) and then parse that as JSON:

{ "message": "{\"foo\": \"bar\", \"baz\": {\"a\": \"b\", \"c\": \"d\"}}" }

I can extract this at search time with rex and spath, but would prefer to do so at index time. Parsing this message with jq .message -r | jq . gives:

{ "foo": "bar", "baz": { "a": "b", "c": "d" } }

What I ideally want is to have it look like:

{ "message": { "foo": "bar", "baz": { "a": "b", "c": "d" } } }
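The desired transform, shown in plain Python for clarity (this is the logic, not the Splunk index-time config, which would need INGEST_EVAL or a scripted/modular input to replicate): decode the outer event, then parse the escaped "message" string as JSON and re-embed it.

```python
import json

# The raw event as it arrives: "message" holds a JSON-encoded string
raw = '{ "message": "{\\"foo\\": \\"bar\\", \\"baz\\": {\\"a\\": \\"b\\", \\"c\\": \\"d\\"}}" }'

event = json.loads(raw)                      # outer JSON -> dict
event["message"] = json.loads(event["message"])  # inner escaped JSON -> dict
```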
I often use job inspector to troubleshoot my dashboards. I add the following code after my query:  <progress><eval token="strSearch">$job.search$</eval></progress> And add the following html whic... See more...
I often use job inspector to troubleshoot my dashboards. I add the following code after my query:  <progress><eval token="strSearch">$job.search$</eval></progress> And add the following html which shows me the search string while I'm in Edit mode.  This is very handy as it helps with troubleshooting for search syntax errors as well as displaying all the token values that constructed the search string:   <row><panel><html><pre>$strSearch$ </pre></html></panel></row> This all works fine except when my query is based on a base search (i.e. I'm doing post-processing searches).  Then the above code only shows me the base search portion but not the full search.   Is there a way to display the full search?
Below is my JSON event. I want to be notified whenever there is a change in the last property, "displayName": Included Updated Properties, when newValue:false and oldValue:true. Please let me know the search query. JSON event:

resultReason:
targetResources: [
  {
    administrativeUnits: [ ]
    displayName: Authorization Policy
    id: abc
    modifiedProperties: [
      {
        displayName: PermissionGrantPolicyIdsAssignedToDefaultUserRole
        newValue: ["microsoft-user"]
        oldValue: ["Manage"]
      }
      {
        displayName: Included Updated Properties
        newValue: "true"
        oldValue: "false"
      }
      { }
    ]
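The alert condition can be sketched in plain Python before translating it to SPL (spath over targetResources{}.modifiedProperties{} plus a where clause). This is a hedged sketch: I'm assuming newValue/oldValue arrive as the plain strings "false"/"true"; if your source JSON-encodes them further, the comparison strings need adjusting.

```python
def should_notify(event):
    """True when 'Included Updated Properties' flips from true to false."""
    for resource in event.get("targetResources", []):
        for prop in resource.get("modifiedProperties", []):
            if (prop.get("displayName") == "Included Updated Properties"
                    and prop.get("newValue") == "false"
                    and prop.get("oldValue") == "true"):
                return True
    return False

sample = {"targetResources": [{"modifiedProperties": [
    {"displayName": "Included Updated Properties",
     "newValue": "false", "oldValue": "true"}]}]}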
I have a lookup table with an event name and min/max thresholds. I need to join this (left on the lookup) with the event count, with null fill on events not present in the search. Lastly, I need a row-wise comparison of the event count against min/max, and conditional formatting coloring rows with counts out of band. I am able to left-join the data, but I am unable to proceed beyond that, as I am not able to reference the attributes in any additional evals. Any help or direction would be greatly appreciated:

| inputlookup bk_lookup.csv
| join type=left left=L right=R where L.alertCode = R.alertCode
    [search index=my_index log_group="/my/log/group" "*cache*"
    | rex field=event.message "alertCode: (?<alertCode>.*), version: (?<version>.*)"
    | stats count as invokes by alertCode]
| table L.alertCode, R.invokes, L.min, L.max
| fillnull value=0 R.invokes
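The intended end state, sketched in plain Python with made-up alert codes and thresholds: left join on the lookup, zero-fill missing counts, then flag rows whose count falls outside [min, max]. (In SPL, the L./R. prefixed field names from join often need a rename, or quoting as 'L.min', before eval can reference them, which may be the wall you're hitting.)

```python
# Stand-ins for | inputlookup and the stats output of the subsearch
lookup = {"A1": {"min": 10, "max": 100}, "A2": {"min": 5, "max": 50}}
invokes = {"A1": 120}  # A2 produced no events in the search window

rows = []
for code, band in lookup.items():        # left join: every lookup row kept
    count = invokes.get(code, 0)         # fillnull value=0
    rows.append({
        "alertCode": code,
        "invokes": count,
        "out_of_band": count < band["min"] or count > band["max"],
    })
```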
Hello, I am fairly new to the Splunk administration side of things. I am attempting to change the ingest index for WinEventLog from index=main to index=windows. I am on a single-instance Splunk Enterprise system. I have tried two methods, both on the Splunk server:

1. props.conf and transforms.conf

/opt/splunk/etc/system/local/transforms.conf:
[RedirectWinEventLog]
REGEX = WinEventLog
DEST_KEY = _MetaData:Index
FORMAT = windows
[Redirectwineventlog]
REGEX = wineventlog
DEST_KEY = _MetaData:Index
FORMAT = windows

/opt/splunk/etc/system/local/props.conf:
[WinEventLog]
TRANSFORMS-index = RedirectWinEventLog
[wineventlog]
TRANSFORMS-index = Redirectwineventlog

After restarting Splunk and the Windows UF, the logs were still going to index=main.

2. inputs.conf

/opt/splunk/etc/system/local/inputs.conf
[WinEventLog]
index = windows

After restarting Splunk and the Windows UF, the logs were still going to index=main.

I am at my wits' end searching documentation and forums. Any help would be greatly appreciated.
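One possibility worth checking, offered as a hedged sketch rather than a definitive diagnosis: the inputs.conf attempt was placed on the Splunk server, but Windows Event Log inputs are collected by the universal forwarder, so the stanza belongs on the forwarder, and each event-log channel gets its own [WinEventLog://...] stanza rather than a bare [WinEventLog]:

```ini
# On the forwarder, e.g.
# C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf
[WinEventLog://Application]
index = windows

[WinEventLog://Security]
index = windows

[WinEventLog://System]
index = windows
```

After editing, restart the forwarder; the index must also already exist on the indexer.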
I should clarify that I am not talking about the "ignoreOlderThan" setting, which by my understanding only looks at the file date; what I'm interested in is ignoring events themselves that are older than a certain date. We have a lot of events come in through methods other than file ingestion, and a lot of the time we get events that are older than the configured retention age on an index, which causes early eviction of other events lumped into the same bucket. I'm looking for a way, if possible, to look at events on the heavy forwarders and reject them if they are older than a certain age, ideally per index as well.
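One index-time approach worth investigating, sketched with made-up stanza names and a 90-day cutoff: an INGEST_EVAL transform on the heavy forwarder that reroutes old events to the nullQueue. This is a hedged sketch of the pattern, not a tested config; per-index behavior would come from attaching different transforms (or thresholds) to different sourcetypes/sources.

```ini
# props.conf on the heavy forwarder (stanza name is an example)
[my_sourcetype]
TRANSFORMS-drop_old = drop_too_old

# transforms.conf
[drop_too_old]
# 7776000 s = 90 days; events older than that are discarded
INGEST_EVAL = queue=if(_time < now() - 7776000, "nullQueue", queue)
```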
Hi, could anyone help me with this use case? I'm trying to figure out my alert logic: a scanner use case, scanning many IPs on many ports.
Hi, I have log files coming in at different times, but I need to compare logs of the same time.

1. Log1 - file received every 30 mins, e.g. 12:30, 13:00, 13:30, 14:00, 14:30, 15:00, 15:30
2. Log2 - file received every 2 hrs, e.g. 9:00, 11:00, 13:00, 15:00

Here, I need to compare Log1 and Log2 only after receiving Log2, but the catch is that if I run the report for 4 hrs, results will be picked from Log1 for times where we don't have data in Log2, which is not correct. I need results matching the same time in both logs. Is there any way to schedule this?
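The filtering step can be sketched simply: restrict Log1's intervals to timestamps that also exist in Log2's schedule before comparing. A hedged sketch with the times from the question (in SPL this might become a subsearch or lookup of Log2's timestamps):

```python
# Log1 arrives every 30 min, Log2 every 2 h; compare only shared timestamps
log1_times = ["12:30", "13:00", "13:30", "14:00", "14:30", "15:00", "15:30"]
log2_times = {"09:00", "11:00", "13:00", "15:00"}

comparable = [t for t in log1_times if t in log2_times]
```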
Hello All, I need your help to understand the impact of the time ranges users select while running their search queries. Some users may be running their SPL with longer time ranges such as All Time, 30 days, Year to date, or Previous year. Does a longer time range have any implications for Splunk search, server memory, CPU utilization, slowness, or any other negative impact, and how can we measure those implications? I reached out on a different forum and got these details:

1. Time range has a massive impact on the resources required.
2. It impacts speed and memory utilization.
3. It requires more resources.

Thus, I need your help to understand the impact in detail. The goal is to measure the implications of saved searches having longer time ranges, and the potential gain if we identify and alter those searches to use narrower time ranges. And how do we determine which time range is excessive among the available predefined options, like:

Last 24 hours
Last 7 days
Last 30 days
Week to date
Month to date
Year to date
and many more.

Any information from your end will be very helpful. Thank you.
I want to create a list, per index, of all the sourcetypes under it and the key/value pairs set in those sourcetypes, and I want to export this to a CSV file. index=*

Name                          Value
CHARSET                       UTF-8
MAX_TIMESTAMP_LOOKAHEAD       23
etc.
Hello! I am trying to create a punch card panel in a dashboard. I want to sort the X field on week number in ascending order and the Y field on weekday in ascending order. My issue is that however I formulate the query, one of the axes ends up descending and the other ascending. Here is my query:

index="*******" sourcetype=_json
| stats latest(Number_of_Commits) as Commits by week Day
| eval week_sort = case( Day=="Monday", 1, Day=="Tuesday", 2, Day=="Wednesday", 3, Day=="Thursday", 4, Day=="Friday", 5, Day=="Saturday", 6, Day=="Sunday", 7 )
| eval sort_field = week_sort*10 + case( Day=="Monday", 1, Day=="Tuesday", 2, Day=="Wednesday", 3, Day=="Thursday", 4, Day=="Friday", 5, Day=="Saturday", 6, Day=="Sunday", 7 )
| sort sort_field week_sort

The result of this query is shown in the attached screenshot. What am I missing to make both the X and Y axes sort in ascending order? Thanks in advance.
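The ordering you're after, expressed in plain Python: a single composite key of (week, weekday index), both ascending, rather than two competing sort fields. This is a hedged sketch of the intent (in SPL the analogue would be sorting by week and one weekday-index field, e.g. | sort week week_sort); whether the punch card viz then honors row order is a separate panel-settings question.

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

rows = [
    {"week": 2, "Day": "Friday"},
    {"week": 1, "Day": "Sunday"},
    {"week": 1, "Day": "Monday"},
]

# One composite key: week ascending, then weekday ascending
rows.sort(key=lambda r: (r["week"], DAYS.index(r["Day"])))
```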
I need some help creating a pie chart of songs from this raw data. The command I'm using is:

| rex (?<track>(?<=title=)"(.*?)".*$) | stats count by track

This isn't showing a count of the songs; can anyone help me get this to work? Regex101 shows the regex as working to extract the song titles. Splunk does not give an error, but the stats/viz panels show nothing. Example of the raw log below:

{ mediaId="NowPlayingId39" title="On The Ground" artist="ROSÉ" album="" duration=0 trackPosition=39/50 image=null }
{ mediaId="NowPlayingId40" title="Rollercoaster" artist="Bleachers" album="" duration=0 trackPosition=40/50 image=null }
{ mediaId="NowPlayingId41" title="Ghost of You" artist="Mimi Webb" album="" duration=0 trackPosition=41/50 image=null }
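A likely culprit, offered as a hedged observation: in SPL the rex pattern must be a quoted string, e.g. | rex "title=\"(?<track>[^\"]+)\"", whereas the unquoted form above silently extracts nothing. The extraction itself, demonstrated with a stdlib regex over two of the sample events:

```python
import re

raw = (
    '{ mediaId="NowPlayingId39" title="On The Ground" artist="ROSÉ" album="" }\n'
    '{ mediaId="NowPlayingId40" title="Rollercoaster" artist="Bleachers" album="" }'
)

# Capture whatever sits between the quotes after title=
tracks = re.findall(r'title="([^"]+)"', raw)
```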
Hello Team, I am getting the below warning. Is it harmful?

check_for_indexer_synced_configs: default/inputs.conf will not be synced to indexers in Victoria. If this file is necessary on indexers, configure the settings in the Splunk UI or via Admin Config
Hi Splunkers, in the "Architecting Splunk 8.0.1 Enterprise Deployments" coursework we were given a data sizing sheet to calculate everything, but the sheet does not cover the frozen requirements. I tested one of the examples against the "Splunk sizing" website, https://splunk-sizing.appspot.com/, and it matches the data sizing sheet, but I need to introduce "frozen" to the table. To validate the calculations, I assumed, for example:

Daily Data Volume = 5GB
Raw Compression Factor = 0.15
Metadata Size Factor = 0.35
Number of Days in Hot/Warm = 30 days
Number of Days in Cold = 60 days

Then, for testing, I increased the Archived (Frozen) slider from 0 days to 9 months and found that the "Archived" storage requirement is now 202.5 GB. My question is: what calculation is used to determine the "Archived" storage requirement of 202.5 GB in this case? Thank you in advance.
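For what it's worth, 202.5 GB is consistent with archived storage keeping only the compressed rawdata (no index metadata): daily volume times the raw compression factor times the days frozen. A quick arithmetic check under that assumption, treating 9 months as 270 days:

```python
daily_gb = 5            # Daily Data Volume
raw_compression = 0.15  # Raw Compression Factor
frozen_days = 9 * 30    # 9 months ~= 270 days

# Archived (frozen) keeps compressed rawdata only, so metadata is excluded
archived_gb = daily_gb * raw_compression * frozen_days
```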
I have a dashboard with a common search query, where I need to represent the output of the same search query over two time ranges. Time Range 1 and Time Range 2 are added as input filters in the dashboard, so I am planning to create a base search for each time range. After adding them as below, data is not coming into the panels' output, yet when I click "Open in Search" the output count does show in the search. Please help me fix this issue. Below are the screenshots and base searches for reference.

Base searches:

<search id="base_search_1">
  <query>index=xxx source=xxx</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>

<search id="base_search_2">
  <query>index=xxx source=xxx</query>
  <earliest>$field2.earliest$</earliest>
  <latest>$field2.latest$</latest>
</search>