All Topics

We are in the midst of a migration from one server to another and need to see whether any queries are running against specific indexes, virtual indexes, and sourcetypes. I have been trying a number of queries against the audit log but can't find a way to extract the following information for all active queries and reports: 1. name and count of indexes, 2. name and count of virtual indexes, 3. name and count of sourcetypes. I've been searching for hours; any help is appreciated.
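A possible starting point (an untested sketch, not a definitive answer): the _audit index keeps the full search string of every search, so index= and sourcetype= terms can be pulled out with rex. The field names and patterns below are assumptions and will need tuning, e.g. for macros or saved searches that hide the index name:

```spl
index=_audit action=search info=completed search=*
| rex field=search max_match=0 "index\s*=\s*\"?(?<used_index>[\w\-\*]+)"
| rex field=search max_match=0 "sourcetype\s*=\s*\"?(?<used_sourcetype>[\w\-:\*]+)"
| stats count by used_index
```

Swap the final stats to count by used_sourcetype for the sourcetype view; virtual indexes would need the same treatment against however their names appear in your search strings.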
Hi, we are having trouble seeing the data sent via syslog in CEF format from Imperva to Splunk. We have the Splunk Add-on for Imperva SecureSphere WAF installed. Thanks for your quick response. Regards
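A quick sanity check (a sketch; no authoritative sourcetype names assumed): CEF events always begin with a CEF:0|<vendor> prefix, so searching for that string across all indexes shows whether the data is arriving at all, and under which sourcetype:

```spl
index=* "CEF:0|Imperva" earliest=-60m
| stats count by index, sourcetype, source, host
```

If this returns nothing, the problem is upstream of the add-on (syslog input or forwarding); if events appear under an unexpected sourcetype, the add-on's parsing simply isn't being applied to them.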
I have a macro already created in an app. Now I need to change the macro's name, but I couldn't find any option to rename it. Is there any way to rename a macro? Thanks in advance.
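There is no rename option in the UI; one common approach (a sketch, assuming the macro lives in an app's local directory; the stanza names and definition below are placeholders) is to edit macros.conf directly, duplicate the stanza under the new name, and remove the old one:

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/macros.conf

# copy the old stanza's contents under the new name...
[my_renamed_macro]
definition = index=main sourcetype=access_combined

# ...then delete the old [my_old_macro] stanza and reload
# (restart Splunk, or hit the /debug/refresh endpoint).
```

Remember to update any saved searches or dashboards that reference the old macro name.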
I'm trying to write a query that breaks out, by index, all searches that look back over certain day increments. Basically, I want to determine whether users are actually writing searches that query the 90 days of data retention we are currently set up for, or whether users consistently look back less than 90 days. I would like to display the results in this format or something similar. I'm not very strong with SPL currently, so any advice or help is much appreciated. Thanks in advance. Index 1: 7 day searches | 14 day searches | 21 day searches | 28 day searches | 90 day searches
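One hedged sketch against the audit log: _audit events for completed searches carry search_et and search_lt (the epoch boundaries of the time range searched), so the lookback can be derived and bucketed. The index rex and the bucket edges are assumptions to adjust:

```spl
index=_audit action=search info=completed search=*
| eval lookback_days = round((search_lt - search_et) / 86400)
| rex field=search max_match=0 "index\s*=\s*\"?(?<used_index>[\w\-\*]+)"
| eval range = case(lookback_days<=7,"7 day", lookback_days<=14,"14 day",
                    lookback_days<=21,"21 day", lookback_days<=28,"28 day",
                    lookback_days<=90,"90 day", true(),">90 day")
| chart count over used_index by range
```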
Hi, could anyone tell me how to monitor search heads (SHs) using the Website Monitoring app installed on a heavy forwarder (HF)? I am getting the error below: uiHelper submitValueEdit operator failed for endpoint_base=data/inputs/web_ping. Please let me know if any proxy details or other details need to be added (we are able to ping the SH URLs from the HF manually).
Hi all, I am new to Splunk and am trying to parse decoded HTTP data into a table with unique fields like "Method", "URI", "Host", "X-Forwarded-IP", etc. In order to achieve this I was thinking of setting unique separators between fields and values, but this is as far as I've got. Any suggestions on how to do this better and more elegantly are welcome. One other issue is that not every request is going to have the same set of fields, so keep in mind that it can vary, although the majority will be the same. Thanks. SPL:
index="index2" EventType=type2
| base64 field=RequestContent action=decode mode=replace suppress_error=True
| rex field=RequestContent mode=sed "s/\\\x0d\\\x0a/\n/g"
| rex field=RequestContent mode=sed "s/ \//\nURI::/g"
| rex field=RequestContent mode=sed "s/ HTTP Version\//\nHTTP::/g"
| rex field=RequestContent mode=sed "s/\n\n/\n/g"
| rex field=RequestContent mode=sed "s/\n{/\nOther Info::{/g"
| rex field=RequestContent mode=sed "s/\n</\nOther Info::</g"
| rex field=RequestContent mode=sed "s/: /::/g"
| dedup RequestContent
| where RequestContent!="None"
| eval RequestContent = "Method::".RequestContent
| rex field=RequestContent mode=sed "s/\n/#/g"
| table RequestContent
Original Request: GET /favicon.ico HTTP/1.1\x0d\x0aHost: 1.1.1.1\x0d\x0aX-Real-IP: 2.2.2.2\x0d\x0aX-Forwarded-For: 185.1.1.1\x0d\x0aX-Forwarded-Proto: https\x0d\x0aX-Forwarded-Port: 443\x0d\x0aX-Forwarded-Host: 2.2.2.2\x0d\x0aAccept: image/webp,image/apng,image/*,*/*;q=0.8\x0d\x0aCookie: IO_id_NewSearch_90_84_245_165=778528c061e04a3facd579a51c1ec341; IO_idts_NewSearch_90_84_245_165=1591695485841; bb96b56e607644689f860e05a8e775ef=WyIzODcyOTc3OTMyIl0; IO_refts_NewSearch_90_84_245_165=1592407882547; APP_LANG=el-gr; APP_REGION=gr; IO_idvc_NewSearch_90_84_245_165=16; IO_viewts__90_84_245_165=1592408854137; IO_viewts_NewSearch_90_84_245_165=1592408854138\x0d\x0aPragma: no-cache\x0d\x0aReferer: https://90.84.245.165/\x0d\x0aUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36\x0d\x0aCache-Control: no-cache\x0d\x0aSec-Fetch-Dest: image\x0d\x0aSec-Fetch-Mode: no-cors\x0d\x0aSec-Fetch-Site: same-origin\x0d\x0aAccept-Encoding: gzip, deflate, br\x0d\x0aAccept-Language: en-US,en;q=0.9,zh-TW;q=0.8,zh;q=0.7\x0d\x0aVia: proxy A\x0d\x0a\x0d\x0a Modified Request by SPL Method::GET#URI:::HTTP/1.1#Host::1.1.1.1#X-Real-IP::2.2.2.2#X-Forwarded-For::2.2.2.2#X-Forwarded-Proto::https#X-Forwarded-Port::443#X-Forwarded-Host::1.1.1.1#Accept::text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9#Cookie::HW_id_NewSearch_90_84_245_165=778528c061e0a51c1ec341; IO_idts_NewSearch_90_84_245_165=15485841; bb96b5775ef=WyIzODcyOTc3OTMyIl0; IO_refts_HuaweiSearch_90_84_245_165=1592407882547; APP_LANG=el-gr; APP_REGION=gr; HW_idvc_HuaweiSearch_90_84_245_165=16; IO_viewts__90_84_245_165=1592408854137; IO_viewts_NewSearch_90_84_245_165=1592408854138#Referer::http://www.more.org/showconfirmpage/?url=https://1.1.1.1#User-Agent::Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36#Sec-Fetch-Dest::document#Sec-Fetch-Mode::navigate#Sec-Fetch-Site::cross-site#Sec-Fetch-User::?1#Accept-Encoding::gzip, deflate, br#Accept-Language::en-US,en;q=0.9,zh-TW;q=0.8,zh;q=0.7#Upgrade-Insecure-Requests::1#Via::proxy A# Method::POST#URI::getNewList/v1 HTTP/1.1#Host::noname-dre.dt.noname.com#X-Real-IP::23.3.3.3#X-Forwarded-For::21.9.9.30#X-Forwarded-Proto::https#X-Forwarded-Port::443#X-Forwarded-Host::searchnews-dre.dt.noname.com#Content-Length::415#Authorization::SDK-HMAC-SHA256 Access=183b7bff5e48403c8c07e07, SignedHeaders=content-type;hmactoken;host;x-sdk-date, Signature=e73e171196bf221d08b7a2e365607b751d0f25f2e88d4d892#X-Sdk-Date::202T150331Z#hmacToken::VqP83hXcAq/TqRFOarchlCtFh5G+o=#Content-Type::application/json#Accept-Encoding::gzip#User-Agent::okhttp/3.12.0#Other 
Info::{"transId":"961b5d9720db4078b8349ec","version":"10.1.2.200","deviceId":"4ff2e1c83f3b43a693bee925146c5af4","userId":"5190064000024056394","serviceToken":"","lang":"zh-cn","phoneModel":"JNY","locale":"cn","net":"1","sysVer":"EmotionUI_10.1.0","ts":"159259","cmdId":"refresh","cmdVer":null,"userGrant":null,"channelId":"topNews","region":null,"newsCount":"10","pageNumber":1,"lastExposeItems":null}
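For what it's worth, a possibly simpler sketch: since the decoded request is a standard HTTP message, one rex can grab the request line and another (with max_match=0) can capture all header name/value pairs in one pass, instead of chaining many sed replacements. Field names come from the sample above; the header pattern is an assumption that may need tuning for unusual headers:

```spl
index="index2" EventType=type2
| base64 field=RequestContent action=decode mode=replace suppress_error=True
| rex field=RequestContent mode=sed "s/\\\x0d\\\x0a/\n/g"
| rex field=RequestContent "^(?<Method>\S+)\s(?<URI>\S+)\sHTTP\/(?<HTTP>\S+)"
| rex field=RequestContent max_match=0 "(?m)^(?<hdr>[\w\-]+):\s(?<val>[^\n]+)"
| eval headers = mvzip(hdr, val, "::")
| table Method URI HTTP headers
```

Missing headers simply produce fewer multivalue entries, so a variable set of fields is handled for free.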
We have a web API that orchestrates calls to other services. For example, we may have an incoming call to `/api`, which then may make 3 calls to a system of record (SOR). These executions end up in Splunk in a single event (mostly for ease of investigation in other avenues). So we have events that look like the following:
{ SOR.Executions.0.Operation: "GetInfo", SOR.Executions.0.TimeInMs: 321, SOR.Executions.1.Operation: "UpdateRecord", SOR.Executions.1.TimeInMs: 234, SOR.Executions.2.Operation: "DoSomethingElse", SOR.Executions.2.TimeInMs: 532 }
I've been able to successfully extract singular values from these via a search such as:
index="docker"
| fields SOR.Executions.*.Operation
| foreach SOR.Executions.*.Operation [eval Operation=mvappend(Operation, '<<FIELD>>')]
| mvexpand Operation
| fields Operation
| stats count by Operation
| sort count desc
This works well for retrieving singular values without correlations (and allows for nice pie charts of how often specific operations are performed), but now I want to get percentile timings of each individual operation. For example, I want to run statistics on how long a "GetInfo" operation takes vs. how long a "DoSomethingElse" operation takes. The problem is that the number of SOR calls varies depending on the input to the API call. We may end up with 3 SOR calls if 1 customer is passed in, or 8 calls if 5 customer IDs are passed in. My initial thought was to do an eval to grab the field name and put it in a multivalue field, then do a rex to pull out the execution digit (the dynamic part), tokenize that, and then do a subsearch based on it. However, that gets into things I can't find documentation for (pulling out field names, for example) and goes down a complex rabbit hole. Is there an easier avenue that I'm missing?
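One hedged sketch: keep the correlation by building "operation|time" pairs inside the foreach (using the <<MATCHSTR>> token to reach the matching TimeInMs field), then split after mvexpand. Field names come from the sample events; this assumes the numeric suffix lines up across the two field families:

```spl
index="docker"
| foreach SOR.Executions.*.Operation
    [ eval pairs = mvappend(pairs, '<<FIELD>>' . "|" . 'SOR.Executions.<<MATCHSTR>>.TimeInMs') ]
| mvexpand pairs
| eval Operation = mvindex(split(pairs, "|"), 0),
       TimeInMs  = tonumber(mvindex(split(pairs, "|"), 1))
| stats count perc50(TimeInMs) AS p50 perc95(TimeInMs) AS p95 by Operation
```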
Initially I have a query with successful VPN user logins (usernames). Now I want to get the maximum number of users per day, per month, for 3 months. So initially: how do I extract the date and month, and then count the users by day and by month?
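A hedged sketch (the index, sourcetype, and user field names are placeholders for your VPN data): bin to one day, count distinct users per day, then take the per-month maximum:

```spl
index=vpn sourcetype=vpn_logs action=success earliest=-3mon@mon
| bin _time span=1d
| stats dc(user) AS users_per_day by _time
| eval month = strftime(_time, "%Y-%m")
| stats max(users_per_day) AS max_daily_users by month
```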
Hi all, can you please help me? I am calculating Shannon entropy values for domains from a single index and have two questions. 1) The SPL below works well but calculates Shannon entropy only for the first 100 domain entries. Is there a way to mitigate that, or is it a Splunk limitation? 2) Is there a more elegant way to use mvexpand so that I don't have to calculate entropy twice to get results, since the `ut_shannon(domain)` output looks like the attached picture? Hope this makes sense. SPL:
index="index1" sourcetype=sourcetype1 earliest=-24h
| fields Domain
| stats values(Domain) as domain
| `ut_shannon(domain)`
| fields domain
| mvexpand domain
| rename domain as col1
| appendcols [search index="index_sdas" sourcetype=ST_SDMP_SDAS earliest=-24h | fields Domain | stats values(Domain) as domain | `ut_shannon(domain)` | fields ut_shannon | mvexpand ut_shannon | rename ut_shannon as col2]
| where col2 > 4
| table col2 col1
| rename col2 as ShannonEntropy, col1 as Domain
| eval ShannonEntropy = substr(ShannonEntropy,1,7)
Thanks
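On question 1, the 100-entry cap most likely comes from a multivalue limit on values()/list() rather than a hard Splunk limit. A sketch that sidesteps both the cap and the double calculation: aggregate with stats count by so each domain stays its own row, making mvexpand and appendcols unnecessary (behavior of URL Toolbox's ut_shannon macro assumed):

```spl
index="index1" sourcetype=sourcetype1 earliest=-24h
| stats count by Domain
| rename Domain AS domain
| `ut_shannon(domain)`
| where ut_shannon > 4
| eval ShannonEntropy = round(ut_shannon, 4)
| table domain ShannonEntropy
```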
Currently my query uses dedup to remove identical events: dedup comp_id _time. Is there an alternative to dedup for seeing only unique events?
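One common alternative (a sketch; index name is a placeholder): stats collapses duplicates just like dedup, and keeps a duplicate count as a bonus:

```spl
index=your_index
| stats count AS copies latest(_raw) AS _raw by comp_id, _time
```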
Hello everyone, we have configured some automatic field extractions using regular expressions on logs that can get really big. These field extractions are very important; if they fail, we are missing critical information in our daily monitoring. At some point the field extraction didn't work, and we realized it was because of the regex depth limit. When we ran it manually with rex we got: Streamed search execute failed because: Error in 'rex' command: regex="<the_regex>" has exceeded the configured depth_limit, consider raising the value in limits.conf. We fixed it by optimizing the regex and now it's working fine. But we cannot be sure the issue is absolutely fixed; it could potentially happen again in the future. We would like to configure a Splunk alert that warns us when this type of error occurs. Does Splunk log anything about this type of error? We could not find anything in _internal or anywhere else, or maybe we didn't look correctly? Thank you for your help!
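A hedged sketch for the alert search, assuming the message text does reach _internal on your version (worth verifying once by deliberately reproducing the error):

```spl
index=_internal "has exceeded the configured depth_limit"
| stats count by host, source, sourcetype
```

If nothing shows up in _internal, the message may only exist in the per-search search.log files under the dispatch directory; in that case an alternative is alerting on the extracted field coming back empty in the monitoring results themselves.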
Hi all, I am trying to subtract values (timestamps) of a multivalue field, but the groups are of differing lengths.
Example data (sysmodtime, idnumber, epoch_time):
05/03/20 12:40 PM, 1, 1588502400
05/01/20 12:01 AM, 1, 1588284060
05/01/20 12:02 AM, 1, 1588284120
05/01/20 12:02 AM, 1, 1588284120
05/02/20 12:00 PM, 2, 1588413600
04/02/20 12:00 AM, 2, 1585778400
04/02/20 01:00 AM, 2, 1585782000
04/02/20 02:00 AM, 3, 1585785600
Desired outcome, with a new field time_diff at the end (sysmodtime, idnumber, epoch_time, time_diff):
05/03/20 12:40 PM, 1, 1588502400, 218340
05/01/20 12:01 AM, 1, 1588284060, -60
05/01/20 12:02 AM, 1, 1588284120, 0
05/01/20 12:02 AM, 1, 1588284120, empty
05/02/20 12:00 PM, 2, 1588413600, 2635200
04/02/20 12:00 AM, 2, 1585778400, -3600
04/02/20 01:00 AM, 2, 1585782000, empty
04/02/20 02:00 AM, 3, 1585785600, empty
The original data is about 200,000 rows long, so we are looking for a structural solution. Is there a simple way to loop through the timestamp values inside the mv field, subtract them, and place the result in a new field? Any suggestions would be very welcome. Cheers, Roelof
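A hedged sketch without any multivalue looping, assuming events arrive in the row order shown: streamstats with a one-row window fetches the neighbouring epoch within each idnumber, and the reverse/reverse trick makes it look at the following row instead of the preceding one, so the last row of each group naturally stays empty:

```spl
| reverse
| streamstats current=f window=1 last(epoch_time) AS next_epoch by idnumber
| reverse
| eval time_diff = epoch_time - next_epoch
```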
Hi, I have an indexA whose logs include data in each row like: User=11111,Language=English,Usage=btn_section1_1,Experience=btn_section2_4,Problems=btn_section4_8. Fields are extracted well here. Now I have imported into Splunk, using DB Connect (inputs) from SQL Server, a key-value dictionary of "btns", and it is stored in indexB with 2 fields per event, which looks like: Key=btn_section1_1, Value="Often"; Key=btn_section2_4, Value="All OK"; and so on. Now I need a query that joins both indexes and produces a table for each event, like below, where the value from indexB translates the actual "btn": User Language Usage Experience / 1111 English "Often" "All OK". I would appreciate some help. Thanks!
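One hedged approach that avoids join: turn indexB into a lookup once (or on a schedule), then apply it per field. The lookup name btn_dictionary.csv is a placeholder:

```spl
index=indexB
| stats latest(Value) AS Value by Key
| outputlookup btn_dictionary.csv
```

Then in the main search, one lookup call per btn field:

```spl
index=indexA
| lookup btn_dictionary.csv Key AS Usage OUTPUT Value AS Usage_label
| lookup btn_dictionary.csv Key AS Experience OUTPUT Value AS Experience_label
| table User Language Usage_label Experience_label
```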
Hi, I have two fields, Created_time and Updated_time. Example: Created_time = 9.15am, Updated_time = 10.35am. Is it possible to bring both field values onto the x-axis (like a range: the first value of the x-axis is 9.15am showing a particular count, and the second value is 10.35am showing a count)? Is this possible? Kindly help me with this.
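A hedged sketch: put both timestamps into one multivalue field, expand it, and aggregate over it, so each distinct time becomes its own x-axis value (assumes the two fields hold comparable time strings):

```spl
| eval t = mvappend(Created_time, Updated_time)
| mvexpand t
| stats count by t
| sort t
```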
GM! We currently have Splunk 7.2.3, and there is a STIG requirement to turn on the FIPS setting. According to the STIG, the only way to turn it on is to reinstall or upgrade the software. Is that correct? If I choose to reinstall 7.2.3 without first uninstalling it, will that work? What is the Windows command to query the FIPS status on the Splunk server?
Hello, I am using a dropdown with a dynamic option search: | inputlookup serverlocations.csv, with field for Label: locationname and field for Value: servername. The serverlocations.csv looks like this in a regular Splunk search:
locationname, servername
UK-London, server1.example.com
DE-Berlin, server1.example.com
US-NewYork, server2.example.com
The problem is that my dropdown shows only the labels UK-London and US-NewYork. It removes DE-Berlin from the dropdown, as if my search were | inputlookup serverlocations.csv | dedup servername. But actually I want all three locationnames in my dropdown. I am totally fine with getting the same search results on the dashboards, because both entries use the same servername. I do not understand why Splunk handles my search with a dedup, especially because my search result looks fine as long as it is not used by the dropdown. Do you know the reason for that behaviour, or can you tell me how to avoid it?
Hello, I have a lookup that will only have one column (MY_COL); this column will always have at least one row but could have multiple. I am trying to take the value of the row(s) and use them in a search query like this: index=my_index RuleID=(INSERT LOOKUP VALUES HERE, IF MULTIPLE MAKE IT AN OR STATEMENT) | table RuleID, etc. Is there a clean way to do this? Thanks in advance!
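A hedged sketch: a subsearch over the lookup is rendered as (RuleID=a OR RuleID=b ...) automatically, once the column is renamed to the target field name. The lookup filename is a placeholder:

```spl
index=my_index
    [ | inputlookup my_lookup.csv
      | rename MY_COL AS RuleID
      | fields RuleID ]
| table RuleID
```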
I am taking the Fundamentals 1 course. I loaded the module 4 data files and had the 239,625 events loaded as per the lab documentation. However, in module 5, when attempting to query this data, nothing was returned. I set the timeframe to All Time and still 0 events are returned. Any ideas?
I want to run a query on a server to display all users, with their names, per application. The goal is to find out which users need which program most on a particular server. However, I have no idea how I could write such a query. Can someone help me, please?
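If the server is Windows with process-creation auditing enabled, a hedged sketch might look like the following; the index, EventCode, and field names are all assumptions about your environment and event source:

```spl
index=wineventlog EventCode=4688 host="<your_server>"
| stats count by user, New_Process_Name
| sort - count
```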
Hi, I have DNS logs with parentheses plus numbers instead of dots in the URL field. How can I replace them with dots? Below are some examples from the logs:
(5)_ldap(4)_tcp(5)cmp(6)_sites(3)rub(3)net(2)oz(0)
(4)wpad(3)rub(3)net(0)
(5)_ldap(4)_tcp(2)dc(6)_msdcs(9)dc(7)core(2)t4(3)rub(3)net(0)
Thank you!
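A hedged sketch using sed-mode rex: each (N) is a DNS label length, so replacing every parenthesised number with a dot and trimming the leading/trailing dots reconstructs the hostname. The field name query is a placeholder for wherever the string lives in your events:

```spl
| rex field=query mode=sed "s/\(\d+\)/./g"
| rex field=query mode=sed "s/^\.//"
| rex field=query mode=sed "s/\.$//"
```

With that, (4)wpad(3)rub(3)net(0) becomes wpad.rub.net.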