All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I want to set up an alert when the server status is not "HEALTH_OK" three times in a row. Any pointers on how to schedule such a search? I receive data every two minutes.

Send Alert If: Status is not "RUNNING" or State is not "HEALTH_OK" 3 times consecutively
Alert Frequency: 1 alert every 2 min for the first 10 min, then 1 alert every 30 min
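A hedged sketch of such an alert search (the index and the Status/State field names are assumed from the post) uses streamstats to count unhealthy results across the last three events:

```
index=server_health
| sort 0 _time
| streamstats window=3 count(eval(Status!="RUNNING" OR State!="HEALTH_OK")) as unhealthy_count
| where unhealthy_count=3
```

Scheduled every 2 minutes over roughly the last 10 minutes, this fires only when all three most recent results are unhealthy; alert throttling can then implement the "every 2 min for 10 min, then every 30 min" cadence.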
Hello Everyone, I need to add table results into this image (screenshot not included in the post). Can anyone help?
What is the difference between `... | where match(a,b)` and `... | search match(a,b)`? Why does `where` work in such cases while `search` does not?
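For context, `match()` is an eval function, so it is only accepted by commands that evaluate eval expressions (`where`, `eval`); the `search` command expects search-language terms (field=value pairs, wildcards, boolean operators) and does not parse `match(...)` as a function call. A small sketch with assumed field values:

```
| makeresults
| eval a="error42", b="error\d+"
| where match(a, b)
```

The same pipeline ending in `| search match(a, b)` would treat `match(a, b)` as a literal search term, which is why it returns nothing rather than applying the regex.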
I have a stream of events that have names, and each name belongs to a certain category. For this example there are two categories: "24X7" and "custom". There are 2 lookup tables: NoEventDates (aka the Holiday table) and ZeroEvents. The ZeroEvents table has a subset of all possible event names with additional parameters:

Category | event   | HourFrom | HourTo | HolidaysOff | DaysOfWeekOff
custom   | Event11 | 11       | 12     | Y           |
custom   | Event12 |          |        | N           | 0,6
custom   | Event13 |          |        | N           |
custom   | Event14 |          |        | Y           | 5.6
24X7     | Event21 | 0        | 24     |             |
24X7     | Event22 | 0        | 24     |             |
24X7     | Event23 | 0        | 24     |             |

"24X7" events are expected within every 15-min window all day long, without holidays or weekends. A custom event can have days of the year (holidays) and/or days of the week (such as weekends) when no events are expected. On every day a custom event is expected, it is guaranteed to come only during its specific time range.

The task is to discover a "missing events" situation as quickly as possible. Custom events will be monitored every 15 min by a sliding 2-hour window within their prescribed hours. For "24X7" I have the following query:

index=...
    [| inputlookup ZeroEvents.csv | where Category="24X7" | fields event | format]
| stats count as eventscount by event
| append
    [| inputlookup ZeroEvents.csv | search DeliveryMethod="24X7" | fields event | eval eventscount=0 ]
| stats sum(eventscount) as total by event
| where total < 1
| stats count as number
| eval NetcoolTitle=number + " 24X7 events with no messages"

I did not need to use the Holiday table for that case. For custom events it gets more complicated, and I'm stuck trying to find a way not to repeat all conditions twice. Here is the structure with one part of the "append query" hard-coded:

index=...
    [| inputlookup ZeroEvents.csv | where DeliveryMethod="Batch" | fields event | format]
| eval date=strftime(_time,"%Y-%m-%d")
| lookup NoEventDates.csv NEDate as date OUTPUT NEDate as Holiday
| eval Holiday=if(isnull(Holiday), "N", "Y")
| eval DOW=strftime(_time,"%w")
| eval currentHour=strftime(now(), "%H")
| lookup ZeroEvents.csv event OUTPUT HolidaysOff DaysOfWeek HourFrom HourTo
| where NOT match(DaysOfWeek, DOW) AND (Holiday="N" OR HolidaysOff="N") AND currentHour >= HourFrom AND currentHour <= HourTo
| stats count as eventscount by topic
| append
    [| inputlookup ZeroEvents.csv
     | eval DOW="0", Holiday="N", currentHour=1
     | where DeliveryMethod="Batch" AND NOT match(DaysOfWeek, DOW) AND (Holiday="N" OR HolidaysOff="N") AND currentHour >= HourFrom AND currentHour <= HourTo
     | eval eventscount=0
     | fields topic eventscount]
| stats sum(eventscount) as total by events
| where total < 1

As I mentioned, `eval DOW="0", Holiday="N", currentHour=1` should either be recalculated using the same logic, or I need somehow to use variables from the outer scope. Is there a simpler way to write such lookup-based queries? Is there a solution without massive code duplication for "custom" events?
Hi. I have a Splunk table which tracks all the plugin versions available to install for each plugin. Please note that each plugin can have multiple values. The idea is to alert when a new version becomes available for any of the plugins (latest entry). The search runs every 5 mins. Kindly help.
Hi @gcusello, Is it possible to run a SQL query from the Splunk search bar against a SQL server? i.e. I want to run a SQL query against server abc1sql07. Is this possible? If so, what permissions do we need to set up on the SQL server to ensure Splunk has permission to query the database? Regards, Rahul
Hi, I'm trying to create some test data which contains some JSON embedded in it. I'm then trying to extract the JSON and display it, which is working with the following search string:

| makeresults
| eval _raw="018-07-13 05:48:30.343 PDT [pool-3-thread-3] INFO STATUS - {\"well_formed_json\": \"yes\"}"
| rex field=_raw "INFO STATUS - (?<json>.*)"
| rename json as _raw

However the results are displayed in a table. I'd like the results to be displayed in a list view with color-coding, nested levels, etc. Is this possible?
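One hedged option: once `_raw` holds just the JSON, `spath` extracts the nested fields, and viewing the results in the Events tab (rather than Statistics) gives the collapsible, syntax-highlighted JSON rendering. A sketch building on the search above:

```
| makeresults
| eval _raw="018-07-13 05:48:30.343 PDT [pool-3-thread-3] INFO STATUS - {\"well_formed_json\": \"yes\"}"
| rex field=_raw "INFO STATUS - (?<json>.*)"
| eval _raw=json
| spath
```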
Hi All, I have a table something like this:

MetricID | Count | Percent
AA       | 1404  | 4%
BB       | 256   | 13%
CC       | 749   | 31%

Now, for each MetricID I have some condition, and based on that condition I would like the background color of the Count and Percent values to be changed to Green, Yellow, or Red. Is it possible to do this with Splunk? Essentially I don't want to change the color of the entire Count or Percent column; rather, based on the condition for each MetricID, I would like to change the color of the respective Count and Percent values. Thanks in advance!!
I've tried to follow other posts as well as the documentation here, and I've come up empty. I have a bunch of device enrollment events in my index, and I want to filter out only those events that are performed by users in our Pilot group listed in a lookup table. The index data looks like this:

DeviceFriendlyName: DeviceMobile-Serial1234
DeviceId: 132483
EnrollmentEmailAddress: user@company.com
EnrollmentStatus: Enrolled
EnrollmentUserId: 123
EnrollmentUserName: mobileUsername
EventId: 148
EventTime: 2020-07-13T22:54:04.4612316Z
EventType: MDM Enrollment Complete

My lookup table is simply a list of:

Full Name
E-mail Address

I want to see just the events where the EnrollmentEmailAddress matches an email listed in the "E-Mail Address" column of the lookup table.

index=myindex source=mysource sourcetype=mysource_type EventId="148"
| search
    [| inputlookup pilot_users.csv
     | rename "E-Mail Address" as EnrollmentEmailAddresss ]
| table EnrollmentEmailAddress, EventId
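For reference, a hedged variant of that search: the subsearch only needs to hand back a list of EnrollmentEmailAddress=value terms, so renaming the lookup column to the event field name (spelled exactly as it appears in the events) and keeping only that field should be enough:

```
index=myindex source=mysource sourcetype=mysource_type EventId="148"
    [| inputlookup pilot_users.csv
     | rename "E-Mail Address" as EnrollmentEmailAddress
     | fields EnrollmentEmailAddress ]
| table EnrollmentEmailAddress, EventId
```

Here the subsearch feeds the base search directly instead of a later `| search`, so the filter is applied as early as possible.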
Using the details in the Cisco Umbrella add-on for Splunk: the pull-umbrella-logs.sh script runs fine manually as the user splunk. The sync will connect and pull logs with no issue. However, when left to run automatically via the Splunk local inputs.conf, it cannot connect and fails to ingest any data. Splunkd log entry:

ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-cisco_umbrella/bin/pull-umbrella-logs.sh" fatal error: Could not connect to the endpoint URL

As far as I can tell, Splunk should be using the same user (splunk) to run the script automatically. What might I be missing?
Hi, I'm pretty new to Splunk and hoped to gain some more experience by attempting to complete the Boss of the SOC v3 challenge. I have Splunk installed on Ubuntu per the instructions on the GitHub page. I have also downloaded and extracted the dataset, but when I try to start Splunk again, I get the following error message:

homePath='/opt/splunk/etc/apps/botsv3_data_set/var/lib/splunk/botsv3/db' of index=_botsv3 on unusable filesystem. Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

I've already changed the splunk-launch.conf file by adding OPTIMISTIC_ABOUT_FILE_LOCKING = 1, but I still get the same message. Any tips on resolving this issue?
Hello, I have SPL that, when opened into a search from the dashboard, is good working SPL, for example:

| rex field=_raw "\"stuff\"+\smaximum=\"100\"\>(?P<Score>[^\<]*)"

In Simple XML (when editing the 'source' in the web UI, and when opening the XML files in an editor) some of the characters get garbled:

| rex field=_raw "\"stuff\"+\smaximum=\"100\"\&gt;(?P&lt;Score&gt;[^\&lt;]*)"

It seems that ">" gets garbled into "&gt;" and "<" into "&lt;".

Another example:

| rex field=Message "Member:\s(?P<UserAdd>[\s\S]*?Account Name"

The < and > get mutated:

| rex field=Message "Member:\s(%3FP&lt;UserAdd&gt;[\s\S]*%3F)Account Name"

So ? becomes %3F, < becomes &lt;, and > becomes &gt;.
Hello there, I'm trying to monitor input files which have spaces in their names, in the below format, but not all of the files are being captured. Kindly advise me if my inputs need to be corrected in any way.

E:\files\xxxxx\eeeee\Error_20200528173833_70515016_ssss yyyy  Planning Mapping - Update Dimensions.log
E:\files\xxxx\eeee\Error_20200615161548_31008196_ Origination Dimension Build.log

My current configuration:

[monitor://E:\files\xxxx\eeee\Error_*]
index = <myindex>
disabled = false
sourcetype = <mysourcetype>
ignoreOlderThan = 30d
crcSalt = <SOURCE>
Hi Guys, I know this seems like a very silly query, but I am looking for this urgently and I don't have much time to create it myself right now as I am travelling. I have a Splunk presentation (covering architecture, components, etc.) and training next week. So if you can help me get/provide this, it would be really fruitful for me.
| stats sum(Score) AS TotalScore, values(value1) AS value1, values(value2) AS value2, values(value3) AS value3 by Username

How can I just add all fields so they're available in an alert, such as when sending an email?
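One hedged approach is `values(*) as *`, which carries every remaining field through the stats as multivalue lists:

```
| stats sum(Score) as TotalScore, values(*) as * by Username
```

This keeps all fields available to the alert action (e.g. for email result tokens), at the cost of multivalue output for any field with several distinct values per Username.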
I have a CSV file with a column labeled published. Timestamp values in that field are listed like so: 2020-07-01T01:17:02.649Z. I'm trying to use the "published" column as _time for some dashboarding, and I'm using:

| inputlookup file.csv
| eval _time=strptime("published","%Y-%m-%dT%H:%M:%S.%N")

However, when I run a timechart search it doesn't return any data. Is my eval command formatted correctly, or is there something else I'm missing?
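One thing worth checking (a hedged observation, not a confirmed diagnosis): in eval, `strptime("published", ...)` parses the literal string "published" rather than the field's value, which always yields null. Unquoting the field name, and matching the milliseconds and trailing Z, would look like:

```
| inputlookup file.csv
| eval _time=strptime(published, "%Y-%m-%dT%H:%M:%S.%3NZ")
```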
I am new to Splunk and I am trying to get results in the below pattern; any help is appreciated. Let's say I am doing a search over the last 1 hour. I want to get only the results from the last week and the last 3 weeks, and show the average of those. For example, if I am searching at 11 AM today over the last 1 hour, I want to get the results from only 10-11 AM on every day of the last 1 week, and 10-11 AM over the last 3 weeks, and show the average of those. I tried earliest and latest time ranges, and also tried timechart with the search, but was not successful.
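A hedged sketch of one way to do this (index and metric field names are placeholders): pull the longer range, keep only events from the target clock hour, and average:

```
index=my_index earliest=-7d@h latest=@h
| eval hour=strftime(_time, "%H")
| where hour="10"
| stats avg(my_metric) as avg_10_to_11
```

Extending earliest to -21d@h gives the 3-week version; adding `by strftime(_time, "%Y-%m-%d")` to a preceding stats would show the per-day values alongside the average.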
So, I'm not certain I am taking the best approach. Maybe if I just describe what I'm trying to do, someone in the community will have a better idea.

- Problem: I have two applications, one called search and another called pageviewer. To a user, they don't realise the difference. However, in the data, the actions in search and the pageviewer page loads are two different events happening near the same time. My goal is to have the list of search strings that lead users to a page, so that I can prepare a report by pageID with a list of key terms.
- Today, I am using a transaction command to group searches by user. However, I only want searches from users that viewed the page of interest. My trouble, using my current method, is that users can view the page at any time, and I am only interested in their search values if it is near the same time they viewed the page.
- Code:

index=server sourcetype=logtype search_string!="" action=search
    [search index=app userID=* pageID=alphnum1234 | dedup userID | table userID]
| <regex field definitions including # of total search | tableresults returned>
| transaction maxspan=1h maxpause=15m userID mvlist=true
| search totalHits=* search_string=*
| eval search_transaction=mvjoin(search_string,",")
| table _time,userID,search_transaction,totalHits,....

- My problem here is that a user could view a page at any time, so if I'm looking across 30 days of events, if that user viewed the page once in those 30 days but also viewed 10 other pages on different days, then I get all of the search results, not just the ones near the time the page of interest was opened. This leads to lots of irrelevant results.
In splunkd events on indexers I see entries such as this:

07-13-2020 11:42:03.337 -0700 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Mon Jul 13 11:42:02 2020). Context: source=/Library/Application Support/Symantec/Silo/NFM/LiveUpdate/Logs/lux.log|host=mac_mini04|symantec:silo:NFM=LiveUpdate:lux|233394

host = splunk_indexer_01 | source = /opt/splunk/var/log/splunk/splunkd.log | sourcetype = splunkd

It does not look like the fields in the "Context: " portion of the events are extracted:

Context: source=/Library/Application Support/Symantec/Silo/NFM/LiveUpdate/Logs/lux.log|host=bpa-mit-mini04|symantec:silo:NFM=LiveUpdate:lux|233394

Do I need to manually extract them via rex? If so, has anyone done this and could perhaps share a template rex command for this event type? If not, what's the best practice? Thank you!

P.S. Something like this?

index=_internal sourcetype=splunkd "Context: "
| rex field=event_message "Context\:\s+(?P<Context>source\=(?P<context_source>\S+?)?[\||$]host\=(?P<context_host>\S+?)(?:\|(?P<context_tail>.*))$)"
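A hedged sketch of a rex along those lines: note the source path in these events can contain spaces (e.g. "Application Support"), so `[^|]+` is safer than `\S+`; the extracted field names here are my own choices:

```
index=_internal sourcetype=splunkd "Context: source="
| rex "Context:\s+source=(?<context_source>[^|]+)\|host=(?<context_host>[^|]+)\|(?<context_sourcetype>[^|]+)\|(?<context_offset>\d+)"
```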
Since upgrading the Splunk_TA_microsoft-cloudservices, I have been getting the following error:  Unable to initialize modular input "mscs_storage_table" defined in the app "Splunk_TA_microsoft-cloudservices": Introspecting scheme=mscs_storage_table: script running failed (exited with code 1) Any suggestions on how to fix this?