All Topics



Hi, I have a choropleth dashboard with a divergent color mode set up. The dashboard uses only 2 fields to display results: count and country. I want to change the divergent color mode to colors of my choosing (green, orange, red). I tried using "mapping.fieldColors" and also tried "mapping.seriesColors", but nothing seems to work. What am I missing here? Thanks in advance.
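In Simple XML, choropleth colors are controlled by the `mapping.choroplethLayer.*` options rather than fieldColors/seriesColors (which apply to charts). A minimal sketch of the panel XML, assuming a Simple XML dashboard and a country field usable with `geom`; the exact hex values are illustrative:

```xml
<map>
  <search>
    <query>... | stats count by country | geom geo_countries featureIdField=country</query>
  </search>
  <option name="mapping.type">choropleth</option>
  <option name="mapping.choroplethLayer.colorMode">divergent</option>
  <!-- divergent mode interpolates between these two endpoint colors -->
  <option name="mapping.choroplethLayer.minimumColor">0x53A051</option>
  <option name="mapping.choroplethLayer.maximumColor">0xDC4E41</option>
  <option name="mapping.choroplethLayer.colorBins">3</option>
</map>
```

Note that divergent mode only takes two endpoint colors; for an explicit three-color green/orange/red scale you may need `colorBins` plus suitable endpoints, or a categorical color mode on a bucketed field.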
Hi, I have a main search that generates a table of event counts by date, UID and host, for example:
date UID host count
20201014 abc01 host1 25
20201015 abc01 host2 16
20201016 xyz01 host1 1
Then I generate additional fields from a sub-search by joining on those dates and UIDs. The problem is, I need to dynamically perform the sub-search for earliest=-30d and latest=-3d based on the value of date in each row of the main search. That is, the sub-search for the second row, where date=20201015, should only extract results from 30 days prior to 2020-10-15 (i.e. earliest=2020-09-15) up to 3 days prior to 2020-10-15 (i.e. latest=2020-10-12). Similarly, the sub-search for the third row should only extract results from 30 days prior to 2020-10-16 (i.e. earliest=2020-09-16) up to 3 days prior to 2020-10-16 (i.e. latest=2020-10-13). How do I do that? So far, I have done:    <main search> | eval date=strftime(_time, "%Y%m%d") ... | join type=inner date, uid, host [search index=subsearch_idx [| gentimes start=-30 end=-3 increment=1d | addinfo | eval earliest=info_min_time | eval latest=info_max_time | return earliest latest] continue_subsearch...] | continue_main_search       It doesn't seem to work, however. How can I populate the dates for the sub-search dynamically based on the value of date in the main search? Thank you for your time and help.
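One way to drive per-row time bounds is the `map` command, which runs the sub-search once per input row and supports `$field$` substitution. This is only a sketch under the field and index names from the question, and `map` is expensive (one search per row, capped by `maxsearches`):

```spl
<main search>
| eval date=strftime(_time, "%Y%m%d")
| eval d=strptime(date, "%Y%m%d")
| eval sub_earliest=relative_time(d, "-30d@d"), sub_latest=relative_time(d, "-3d@d")
| map maxsearches=100 search="search index=subsearch_idx earliest=$sub_earliest$ latest=$sub_latest$ uid=$uid$ host=$host$
    | stats count AS sub_count
    | eval date=\"$date$\", uid=\"$uid$\", host=\"$host$\""
```

If the row count is large, a broader single search over the full window followed by per-row filtering on epoch arithmetic usually scales better than `map`.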
I am unable to do the Splunk Fundamentals 1 course. It is showing "Your access to what is machine data is available shortly" and I am unable to see the module details. Need help on this. I tried clearing the cache and incognito mode too. Thanks in advance.
Hi Splunkers, I have a complex query to extract the IDs from a first search, join it on those IDs to a second search, and then calculate the response times: index=xxxml source=module "matrix-v4" NOT host="xyz.dmz" NOT "somefield1" NOT "somefield2" "<nt3:overall-outcome>*</nt3:overall-outcome>" |where isnotnull(AccuiteCode) |xmlkv | eval MSUserid=AccessCode | eval source1="MS" | join ip [search index=xxxml source="/var/log/production.log" urlPath="/com/system*" "/org/system" NOT "somefield1" NOT "somefield2" | where isnotnull(accessCode) | eval ProdUserID=accessCode| eval source2="Prod"] | where source1="MS" AND source2="Prod" | eval responsetime = Latency/1000 | stats count(responsetime) as Requests avg(responsetime) perc50(responsetime) perc60(responsetime) perc70(responsetime) perc80(responsetime) perc90(responsetime) perc95(responsetime) perc99(responsetime) max(responsetime) by date_mday,date_month,rIdentifier,nt3:overall-outcome,accessCode  This query only returns the first matched content, but we have thousands of rows for the first query. It is somehow unable to join them. Kindly advise.   Thanks, Amit
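The `join` subsearch is subject to limits (by default it is truncated at a result cap and auto-finalized after a timeout), which often makes it look like only the first matches come back. A common workaround is a stats-based correlation that avoids `join` entirely. A sketch, reusing the index/source names from the question and assuming `ip` is the correlation key:

```spl
(index=xxxml source=module "matrix-v4" NOT host="xyz.dmz")
OR (index=xxxml source="/var/log/production.log" urlPath="/com/system*")
| eval src=if(source="module", "MS", "Prod")
| stats values(src) AS src values(Latency) AS Latency BY ip
| where mvcount(src)=2          /* keep only ips seen in both sources */
| eval responsetime = Latency/1000
```

Any fields needed from either side can be carried through with additional `values()`/`first()` aggregations in the `stats`.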
Hi, I have been trying hard to extract the following data into a table with the column names failedTestCases(failedScenarios), nameOfTheTestScenario(name), passedTestCases(passedScenarios). I also want a column with successPercent and failurePercent for each test scenario. Example data: { e2eresult: { features: [ { failedScenarios: 0 name: TPAS Activation scenario with Port In[mocked] passedScenarios: 2 }, { failedScenarios: 0 name: TPAS Activation scenario[mocked] passedScenarios: 4 }, { failedScenarios: 0 name: TPAS Add A Line scenario[mocked] passedScenarios: 6 }, {}, {} ] project: test - automation status: Passed } } Here is what I have done so far:  index=duck source=/var/log/containers/**.log | search "e2eresult" | eval _raw="{\"e2eresult\": [{\"features\":[{\"failedScenarios\":\"0\",\"name\":\"TPAS Activation scenario with Port In [mocked]\",\"passedScenarios\":2},{\"failedScenarios\":\"0\",\"name\":\"TPAS Activation scenario [mocked]\",\"passedScenarios\":4}]}]}" | eval all_fields=mvzip('e2eresult.features{}.failedScenarios', 'e2eresult.features{}.name', 'e2eresult.features{}.passedScenarios', ",") | fields all_fields | mvexpand all_fields | makemv delim="," all_fields | eval failedTestCases=mvindex(all_fields, 0) | eval nameOfTheTestScenario=mvindex(all_fields, 1) | eval passedTestCases=mvindex(all_fields, 2) | table failedTestCases, nameOfTheTestScenario, passedTestCases
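One issue in the query above: `mvzip` takes only two multivalue fields (plus an optional delimiter), so the four-argument call fails. Rather than chaining two `mvzip` calls, a `spath`/`mvexpand` approach keeps each array element intact. A sketch against the JSON structure shown in the question:

```spl
index=duck source=/var/log/containers/*.log "e2eresult"
| spath path=e2eresult.features{} output=features
| mvexpand features
| spath input=features
| where isnotnull(name)                      /* drop the empty {} elements */
| eval total = passedScenarios + failedScenarios
| eval successPercent = round(100 * passedScenarios / total, 2)
| eval failurePercent = round(100 * failedScenarios / total, 2)
| table name passedScenarios failedScenarios successPercent failurePercent
```

`mvexpand features` produces one result per array element, and the second `spath` re-parses that element into flat fields, which avoids the index-alignment fragility of the `mvzip`/`mvindex` pattern.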
If there's an error in a props.conf stanza for a particular sourcetype, where would it show up in the logs? E.g. a key like "SHOULD_LINEMERGE" is misspelled, or one of the values is out of bounds, or something else where Splunk is having issues with the stanza... Where in the logs would this show up? My specific case: /opt/splunk/etc/slave-apps/_cluster/local/props.conf on the master (propagated to indexers):
[sweeper:abcnews]
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ^
TRUNCATE = 100000
MAX_EVENTS = 10000
EXTRACT-sweeper_abcnews = (?s)^\d+-\d+-\d+\s+\d+\:\d+\:\d+\,\d+\s+(?P<module>\S+)\s+\[(?P<processID>.+?)\]\s+(?P<log_level>\S+):\s+(?P<message>.*)$
The primary purposes of the stanza in props.conf are to allow multiline events, define event breaks (timestamps, basically) and extract fields. Splunk however appears to ignore the stanza altogether: multiline events get broken up, and no fields are extracted. The field extraction regex works well elsewhere: tested via "rex" at search time, in "field extractions" at search time, and also in props.conf on a dev Splunk instance. It's as if Splunk is ignoring the stanza altogether in the production instance. Why, and how do I troubleshoot this? Additional context: /opt/splunk/etc/deployment-apps/_server_app_Linux_Clients/local/inputs.conf on the DS, distributed to clients:
[monitor:///var/log/sweeper_abcnews.log]
disabled = false
index = sweeper
sourcetype = sweeper:abcnews
The logs are getting ingested, yet Splunk appears to ignore the relevant stanza in props.conf as if it doesn't exist. Other stanzas in props.conf seem to be working, as multiline events in other sourcetypes do not get broken up. Appreciate the help!
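Outright parse errors in .conf files are typically logged in splunkd.log at startup, but a stanza that is valid yet placed on the wrong tier produces no error at all. In particular, `EXTRACT-` is a search-time setting that must live on the search heads, while `SHOULD_LINEMERGE`/`TIME_*`/`TRUNCATE` are index-time settings that belong on the indexers (or heavy forwarders if they parse first). To see what an instance actually merges for the sourcetype, `btool` is the usual tool; paths below assume a default install:

```shell
# On an indexer: show the effective props for this sourcetype and where each line comes from
/opt/splunk/bin/splunk btool props list sweeper:abcnews --debug

# Confirm the cluster bundle actually reached the peer
ls /opt/splunk/etc/slave-apps/_cluster/local/

# Look for config warnings at startup
grep -i 'props.conf' /opt/splunk/var/log/splunk/splunkd.log | tail
```

If `btool` on the indexer shows the stanza but events still break, check whether the data is parsed upstream (e.g. on a heavy forwarder), since index-time props only apply at the first full parsing tier.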
2020-10-19 05:00:03,744 INFO main() Deletion list: ['user1', 'user2', '$template', 'user233', 'svc_user1', ]  I have this log file that outputs a list of users to be deleted. I want to search this output, extract the users into fields, and then exclude the $template* and svc_user* users. I have tried this to extract the users into a field:   search "list:" | eval del_users=split(_raw,"', '") | table del_users The output looks like: 2020-10-19 05:00:03,744 INFO main() Deletion list: ['user1 user2 $template user233 svc_user1   Any suggestions to get a better output, or how I should be doing this?
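Splitting `_raw` keeps the log preamble attached to the first user. A sketch that first isolates the bracketed list with `rex`, then splits and filters it; the regex assumes the single-line format shown above:

```spl
search "list:"
| rex "Deletion list: \[(?<user_list>[^\]]*)\]"
| eval del_users=split(replace(user_list, "'", ""), ", ")
| eval del_users=mvfilter(NOT match(del_users, "^\$template") AND NOT match(del_users, "^svc_"))
| mvexpand del_users
| where del_users!=""
| table del_users
```

`mvexpand` gives one row per remaining user; drop it if a single multivalue cell per event is preferred.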
Having a look at the ServiceNow Security Operations Event Ingestion Add-on for Splunk ES (https://splunkbase.splunk.com/app/4770/), and it's a little light on documentation and on what is required to configure it. Is this app capable of standing alone on Splunk, or does it need configuration on the ServiceNow side as well? Specifically, when I try to poll the API endpoint mentioned in the setup and scripts (/api/sn_sec_splunkes/notable_event_ingestion), it returns:       { "error": { "detail": null, "message": "Requested URI does not represent any resource" }, "status": "failure" }           Thoughts?
I have a data source on which I have done a manual regex field extraction, and all works fine. Fields are correct and the data is parsed as expected. A manual search returns the results I expect when run in Verbose mode; however, if I run in Fast or Smart mode, the results I get back seem to be approximately 2 hours behind the current data shown in Verbose mode. All the fields are there and I'm getting data back, it's just old data. The search is:       index=main host="my.host.name" sourcetype="ProcessedUser*" | fields _time,timeStamp,action,src_ip,src_mac,user | table timeStamp,action,user,src_ip,src_mac | head 5       Running a normal search is not a problem, as I can switch search modes; however, running in a dashboard is a problem, as it does not use Verbose mode. Any suggestions greatly appreciated.
Hi Splunkers, We have integrated disk backup by default when we procure disks in a server. For our indexers, shall we go for RAID 10, even though this will double our disk space requirements? Or is it fine to have only RAID 0, given that we have integrated disk backup in place?
Hi @links, I have events with future years 2021 and 2022. I need to add random months to those years. Do you know which syntax can be used for this?
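One sketch using `random()` (which returns a pseudo-random integer) to pick a month from 1-12 and rewrite `_time` while keeping the event's year:

```spl
...
| eval rand_month=(random() % 12) + 1
| eval _time=strptime(strftime(_time, "%Y") . "-" . rand_month . "-01", "%Y-%m-%d")
```

The day is pinned to the 1st here for simplicity; the same pattern with another `random() % 28 + 1` term can randomize the day as well.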
Here is some sample data. Can someone help me with a regular expression to extract the highlighted part "status:READY_TO_PROCESS" as a process status?   2020-10-18 14:06:18 [bp-[507bbd99]-completeMachineRun-233466] HitService [INFO] Created typed run Run: id=233467, uuid=7653767a-5e85-409d-aa3e-69bbeac40ad0 name=Final Results {size:0, status:READY_TO_PROCESS, rootRun:7653767a-5e85-409d-aa3e-69bbeac40ad0, data:}
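Based on the sample line above, a `rex` extraction anchored on the literal `status:` prefix would look like this (the field name `process_status` is a choice, not anything from the data):

```spl
... | rex "status:(?<process_status>[A-Za-z_]+)"
| table _time process_status
```

For the sample event this captures READY_TO_PROCESS; widen the character class if status values can contain digits or other characters.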
Hello, I have started working on Splunk recently and have encountered a problem: I cannot find how to add a color (either green or red) to a cell in a table depending on whether it is "<" or ">". Most posts that I have read are either too complicated for me or are for numbers. I simply want to highlight the cell containing the sign. I have 3 columns; the first and last are numbers and the middle is the sign that I want to highlight. Is there a way on the Search page to do what I want? Here is how I get the correct sign:  | eval operator_1 = if( Case1 > Case2 ,">", if(isnotnull(Case1) ,"<","") ) Thank you.
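Cell coloring isn't available on the Search page itself, but once the table is on a dashboard, a Simple XML map-type color palette can match the string values directly. A sketch (colors are illustrative; note `>` and `<` must be XML-escaped in the palette keys):

```xml
<table>
  <search>
    <query>... | eval operator_1 = if(Case1 > Case2, ">", if(isnotnull(Case1), "<", "")) | table Case1 operator_1 Case2</query>
  </search>
  <format type="color" field="operator_1">
    <colorPalette type="map">{"&gt;": #DC4E41, "&lt;": #53A051}</colorPalette>
  </format>
</table>
```

The same formatting can also be applied from the dashboard UI via the table's "Format" (paintbrush) menu, which generates equivalent XML.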
The application log I am working with has ISO 3166 country codes but no latitude and longitude details. With that I am able to build a choropleth easily using the geom command with featureIdField=countryname, but I also want to visualize a cluster map by country. Is there a way I can use geostats on this log without having latitude and longitude?
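`geostats` does require latitude/longitude fields, but they can be supplied from a lookup of country centroids. This sketch assumes a hypothetical CSV lookup named `country_centroids.csv` with columns `country`, `lat`, `lon` (such a file would need to be created and uploaded; it does not ship with Splunk):

```spl
... | stats count by country
| lookup country_centroids.csv country OUTPUT lat lon
| geostats latfield=lat longfield=lon sum(count)
```

Each country's events then cluster at its centroid, which gives a per-country bubble map rather than a true event-location cluster map.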
How to upgrade Splunk Universal Forwarder to a New Version in Ubuntu Linux?
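The usual approach on Ubuntu is to install the newer .deb package over the existing installation, which preserves the configuration under /opt/splunkforwarder/etc. A sketch assuming a default install path; the package filename is a placeholder for whatever version is downloaded:

```shell
# Stop the forwarder before upgrading
sudo /opt/splunkforwarder/bin/splunk stop

# Install the new package on top of the old one (configs are preserved)
sudo dpkg -i splunkforwarder-<version>-linux-2.6-amd64.deb

# Restart and accept the migration prompts non-interactively
sudo /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes
```

It is prudent to back up /opt/splunkforwarder/etc first and to confirm the upgrade path is supported for the version jump in question.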
Hi! Given 2 events: SummaryDialog Component1=wxt_12 Component2=wyt_1 Component3=wzt_3 Component4=wbt_2 SummaryDialog Component1=wyt_2 Component2=wxt_12 Component3=wbt_2 Component4=wzt_1   I'm trying to get a summary of the occurrences of each unique value regardless of the component: wbt_2 2 wxt_12 2 wyt_1 1 wyt_2 1 wzt_3 1 wzt_6 1 Naively, I hoped this would work: index=cls_preprod SummaryDialog | stats count by component*   It does not (it returns no results). Does anyone have any suggestions? I've been googling for a while and have not hit upon a viable solution. Note there are N components. Thanks! (And forgive me if this is a basic question; I am a very basic Splunk user.)
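`stats ... by` can't take a wildcard, but the component values can be merged into one multivalue field and counted after expansion. A sketch for a fixed set of four components:

```spl
index=cls_preprod SummaryDialog
| eval component=mvappend(Component1, Component2, Component3, Component4)
| mvexpand component
| stats count by component
| sort component
```

For an unknown number of components, `foreach` can build the multivalue field generically: `| foreach Component* [ eval component=mvappend(component, '<<FIELD>>') ]` before the `mvexpand`.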
Greetings... I have a table that looks like:
Timestamp | Action | User
YYYY-MM-DD HH:MM:SS | Fail | User1
YYYY-MM-DD HH:MM:SS | Succeed | User2
YYYY-MM-DD HH:MM:SS | Succeed | User1
YYYY-MM-DD HH:MM:SS | Succeed | User1
YYYY-MM-DD HH:MM:SS | Fail | User2
Is there a way to break this down into separate tables by User such that:
YYYY-MM-DD HH:MM:SS | Fail | User1
YYYY-MM-DD HH:MM:SS | Succeed | User1
YYYY-MM-DD HH:MM:SS | Succeed | User1
YYYY-MM-DD HH:MM:SS | Succeed | User2
YYYY-MM-DD HH:MM:SS | Fail | User2
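If grouping rows by user within a single table is enough (as in the desired output above), a sort does it; `sort 0` removes the default 10,000-row cap:

```spl
... | table Timestamp Action User
| sort 0 User Timestamp
```

For literally separate tables per user, options include a dashboard panel driven by a user token/dropdown, or trellis layout on a visualization split by User.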
I am using a Linux rsyslog server to capture syslog from a Cisco ASA firewall and send it to Splunk using the universal forwarder. I have two syslog servers behind a load balancer for redundancy. The problem I am facing is that I am missing a lot of logs on the syslog server. I know syslog uses UDP, which is unreliable. Is there any way I can troubleshoot this issue? Is there a better method I can use to collect this syslog? I tried sending syslog to Splunk directly and I still see missing logs.
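Two common mitigations are switching the transport to TCP (if the ASA and load balancer support it) and giving rsyslog a disk-assisted queue so bursts aren't dropped at the receiver. A sketch of an rsyslog configuration fragment; file paths, port, and queue sizes are illustrative:

```
# /etc/rsyslog.d/asa.conf -- sketch; adjust to your environment
# Accept TCP in addition to UDP (TCP avoids silent datagram loss)
module(load="imtcp")
input(type="imtcp" port="514")

# Disk-assisted queue buffers writes during bursts or slow disk
action(type="omfile" file="/var/log/cisco/asa.log"
       queue.type="LinkedList" queue.filename="asa_q"
       queue.maxdiskspace="1g" queue.saveonshutdown="on")
```

It is also worth checking kernel UDP receive buffer drops (`netstat -su`) and raising `net.core.rmem_max` if the counters show packet loss at the OS level.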
Hi All, I have table-type data in _raw and I want to extract fields. An example _raw is below:
Name       ID         Age
Harry      AAA     23
Will       BBB       27
Brian      CCC      30
The expectation is as below: I want 3 fields (one per column) and it should list like below, so if I do (...|table Name,ID,Age) it should show:
Name ID Age
Harry AAA 23
Will BBB 27
Brian CCC 30
Thanks in advance
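Tabular events with a header row are what the `multikv` command is for: it treats the first line as column names and emits one result per data row. A sketch against the columns shown above:

```spl
... | multikv fields Name ID Age
| table Name ID Age
```

If the header detection doesn't pick up the columns cleanly (e.g. because of irregular spacing), a `multikv` configuration stanza in multikv.conf, or a `rex` with `max_match=0`, are the usual fallbacks.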
We are planning to have a Linux-based DS which would be used to deploy apps mostly to Windows servers and a few Linux-based servers. Are there any exceptions where the DS has to be on Windows to push apps? Will my Linux-based DS suffice for all (Windows + Linux) UFs? We would be using the DS to deploy only to UFs; no indexers or search heads are in scope.