All Topics

Environment: Splunk ES search head running in Splunk Cloud (Classic Experience). There are two apps for a particular sourcetype (let's call it "sourcetype-x"):

TA-customer-props (the old one)
zzz-customer_props (the new one)

Via Settings > Source types > sourcetype-x > Edit > Advanced I added some new extractions and evals. When I try to dump all props using a REST API call, I see that my settings are merged into SA-IdentityManagement. How come? As far as I know, SA-IdentityManagement should contain lookups only. Is there any way to "de-configure" sourcetype-x from TA-customer-props and SA-IdentityManagement and leave its configuration in zzz-customer_props only?
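A way to confirm where each setting actually lives (a suggestion; btool needs CLI access, which on Splunk Cloud means going through support): btool prints the merged props together with the file that contributed every line, so you can tell whether SA-IdentityManagement really defines the settings or whether the REST call merely ran in that app's namespace - REST responses report the app context of the request, not necessarily the file on disk.

$SPLUNK_HOME/bin/splunk btool props list sourcetype-x --debug

Also note that Settings > Source types saves edits into the local directory of whichever app was active at the time, so checking which app you were in when the extractions were added is worth doing before "de-configuring" anything.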
Hi, I was trying to open Browse More Apps in Splunk Enterprise, but it does not load and gives the error "Service Unavailable". We have a proxy configured in server.conf:

[proxyConfig]
http_proxy = http://xxx:8080
https_proxy = http://xxx:8080

Can you help? Thanks in advance.
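For comparison, a minimal [proxyConfig] stanza (a sketch - host names and the no_proxy list are placeholders, and splunkd needs a restart after editing server.conf):

[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1, ::1

If the proxy requires authentication, the http://user:password@proxy.example.com:8080 form is commonly used. "Service Unavailable" can also mean the proxy is blocking splunkbase.splunk.com, so fetching that URL with curl through the same proxy from the Splunk server is a quick sanity check.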
Hello Team, Splunkers, I am working on a correlation search and need a regex to strip all text before a colon (":"). Following the suggestion in https://community.splunk.com/t5/Splunk-Search/How-to-edit-my-regex-to-remove-all-text-before-an-optional/m-p/259105 I managed to strip the text with this expression, derived from that topic:

| rex field=my_host "(?<my_host>[^\:]+)$"

Applied to the following line:

Microsoft.Windows.Server.10.0.LogicalDisk:my_host.server;D

it works and I receive: my_host.server;D. However, if I apply the same expression to the same line but with a colon at the end of the string, like this:

Microsoft.Windows.Server.10.0.LogicalDisk:my_host.server;D:

it does not match. Could you please assist me with editing my expression to cover both cases and still get my_host.server;D as a result? Regards, Nikolay
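A possible variant (a sketch, checked only against the two sample strings here): keep the capture greedy on non-colon characters but allow an optional trailing colon outside the capture group.

| rex field=my_host "(?<my_host>[^:]+):?$"

For Microsoft.Windows.Server.10.0.LogicalDisk:my_host.server;D this captures my_host.server;D as before, and for the variant ending in a colon the :? consumes the final ":" so the same capture still matches.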
I have a lookup that contains a list of searches I want to run, and I want a single search that runs each of them and outputs the results of the searches in the "magic" column. This is the search I am using:

| inputlookup test.csv | map search=$magic$

When I run it, this is the error I get:

Unable to run query '"search index::client* sourcetype::ActiveDirectory | fields admonEventType memberOf sAMAccountName sAMAccountType | head 100 | fieldsummary maxvals=2 | where count > 0 | table field values"'.
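A hedged guess at the cause: the doubled quoting in the error ('"search ..."') suggests the magic column's values carry literal surrounding double quotes, which map then wraps again, producing an unparsable query. A sketch that strips them first (assumes the column is named magic; maxsearches is just a safety cap):

| inputlookup test.csv
| eval magic=trim(magic, "\"")
| map search=$magic$ maxsearches=20

Storing the queries in the CSV without surrounding quotes in the first place avoids the problem entirely.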
Hi. I'm rather fresh to Splunk and all its magic. I was wondering where I can find information about how servers respond to DMC (monitoring console) connections. For example, if the monitoring console has ServerA and ServerB to monitor, how do I force ServerA to use static port X and ServerB to use static port Y when sending data to the monitoring console? I'd expect it to be as simple as:

[specific_stanza_name]
DMC_uri:port_of_choice

but I have not managed to find any information regarding the servers' response to DMC queries.
Hopefully I can explain this in a way that can be understood and, fingers crossed, answered. I have a search that returns the user and date. On occasion the user is blank, in which case I want to run a search against a different index to get the appropriate value and populate the first search's results. I am trying something like the following:

| eval user=if(user="", searchmatch(new search | table UserName), user)

This is easy enough when the value is hard-coded, but I want to grab the result from the second search. Obviously this does not work, but hopefully it gives an idea of what is desired. Any ideas how to accomplish this?
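A sketch of one workaround (index and field names here are assumptions): append a single fallback row from the other index, broadcast it to every event with eventstats, then fill the blanks with coalesce.

index=primary_index ...
| append [ search index=other_index | stats latest(UserName) as fallback_user ]
| eventstats first(fallback_user) as fallback_user
| eval user=coalesce(nullif(user, ""), fallback_user)
| where isnotnull(date)
| table user date

nullif(user, "") turns blank users into null so coalesce can substitute the fetched value; the final where drops the appended helper row (any field unique to the primary search works as the filter). If the correct user differs per event rather than being one value for the whole result set, a lookup or join on a shared key is the more usual approach.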
I did a partial upgrade of one of my environments (all components except the indexers for the moment, due to time constraints), and suddenly the health status is showing IOWait as red. Similar to https://community.splunk.com/t5/Splunk-Enterprise/Why-is-the-health-status-of-IOWait-red/m-p/565902#M9870 Does anyone know whether this is a known issue/bug, or should I tell the customer to file a case with support? The servers in question are really doing... not much. One of them is a master node, supposedly getting killed by IOWait, whereas top shows:

top - 13:12:37 up 210 days, 1:04, 1 user, load average: 0.35, 0.29, 0.28
Tasks: 255 total, 1 running, 254 sleeping, 0 stopped, 0 zombie
%Cpu0 : 4.0 us, 1.3 sy, 0.3 ni, 94.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 0.0 us, 0.3 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu8 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu9 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu10 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu11 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu12 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu13 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu14 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu15 : 0.7 us, 0.0 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 65958320 total, 4973352 used, 60984968 free, 48540 buffers
KiB Swap: 4194300 total, 0 used, 4194300 free. 2479532 cached Mem

The other two are search heads.
Again - top output:

top - 13:13:08 up 174 days, 23:12, 1 user, load average: 5.91, 6.91, 5.82
Tasks: 456 total, 2 running, 454 sleeping, 0 stopped, 0 zombie
%Cpu0 : 19.3 us, 5.0 sy, 0.0 ni, 73.7 id, 0.0 wa, 0.0 hi, 2.0 si, 0.0 st
%Cpu1 : 4.4 us, 7.7 sy, 0.0 ni, 87.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 5.1 us, 6.8 sy, 0.0 ni, 88.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 5.8 us, 5.8 sy, 0.0 ni, 88.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 6.9 us, 3.4 sy, 0.0 ni, 89.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 4.6 us, 6.0 sy, 0.0 ni, 86.4 id, 0.0 wa, 0.0 hi, 3.0 si, 0.0 st
%Cpu6 : 3.8 us, 3.8 sy, 0.0 ni, 92.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 10.6 us, 3.8 sy, 0.0 ni, 85.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu8 : 6.1 us, 5.8 sy, 0.0 ni, 88.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu9 : 4.7 us, 4.4 sy, 0.0 ni, 90.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu10 : 3.9 us, 4.6 sy, 0.0 ni, 88.8 id, 0.0 wa, 0.0 hi, 2.6 si, 0.0 st
%Cpu11 : 4.4 us, 5.1 sy, 0.0 ni, 90.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu12 : 6.4 us, 5.4 sy, 0.0 ni, 87.0 id, 0.0 wa, 0.0 hi, 1.3 si, 0.0 st
%Cpu13 : 9.5 us, 2.7 sy, 0.0 ni, 86.8 id, 0.0 wa, 0.0 hi, 1.0 si, 0.0 st
%Cpu14 : 4.7 us, 5.4 sy, 0.0 ni, 89.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu15 : 9.4 us, 4.0 sy, 0.0 ni, 86.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu16 : 5.1 us, 5.8 sy, 0.0 ni, 89.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu17 : 3.8 us, 6.2 sy, 0.0 ni, 90.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu18 : 7.2 us, 3.9 sy, 0.0 ni, 85.2 id, 0.0 wa, 0.0 hi, 3.6 si, 0.0 st
%Cpu19 : 3.1 us, 4.8 sy, 0.0 ni, 92.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu20 : 5.5 us, 5.9 sy, 0.0 ni, 88.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu21 : 7.6 us, 5.5 sy, 0.0 ni, 86.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu22 : 5.5 us, 5.9 sy, 0.0 ni, 88.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu23 : 5.7 us, 6.4 sy, 0.0 ni, 87.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu24 : 5.8 us, 4.8 sy, 0.0 ni, 89.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu25 : 4.5 us, 5.9 sy, 0.0 ni, 89.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu26 : 5.0 us, 7.4 sy, 0.0 ni, 87.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu27 : 4.7 us, 4.7 sy, 0.0 ni, 90.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu28 : 6.1 us, 5.1 sy, 0.0 ni, 88.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu29 : 5.7 us, 6.4 sy, 0.0 ni, 87.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu30 : 8.8 us, 5.4 sy, 0.0 ni, 85.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu31 : 8.9 us, 4.4 sy, 0.0 ni, 86.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 65938200 total, 9247920 used, 56690280 free, 15468 buffers
KiB Swap: 4194300 total, 0 used, 4194300 free. 1184380 cached Mem

As you can see, the servers are mostly idling; the search heads do some work, but not much. To make things even more interesting, three other SHs dedicated to ES, which are stressed far more than this SH cluster, don't report IOWait problems. All I did was migrate the KV store to WiredTiger and upgrade Splunk from 8.1.2 to 8.2.6. That's all.
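Two hedged suggestions for narrowing this down: pull the health report details over REST to see exactly which indicator is red and with what value, then compare against the thresholds in health.conf.

| rest /services/server/health/splunkd/details splunk_server=local

The IOWait feature's thresholds live in health.conf (the [feature:iowait] stanza; exact indicator names vary by version - see health.conf.spec). If the hosts really are as idle as this top output suggests, relaxing the indicator thresholds in a local health.conf is a legitimate stopgap, though filing a support case is still reasonable if the red state only appeared with 8.2.6.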
Hi All, I want to understand whether there is a way to perform an action on a server through Splunk - for example, to run an ls -lrt command for a path, to kill/terminate a process, to run a script on the server, etc. Your kind help will be highly appreciated. Thank you!
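Splunk can run a script on a host where a Splunk instance (or forwarder) is installed, via alert actions - it is not a general remote-execution tool for arbitrary servers. A minimal savedsearches.conf sketch of the legacy scripted alert (stanza, search, and script names are placeholders; the script must sit in $SPLUNK_HOME/bin/scripts on the machine running the search):

[restart_stuck_process]
search = index=os_logs "process hung"
cron_schedule = */5 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
action.script = 1
action.script.filename = restart_process.sh

For richer integrations (parameters, UI setup) a custom alert action app is the supported route, and for actions on machines without Splunk the usual pattern is an alert that calls an external orchestration or webhook endpoint.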
Hi Splunk Community! I have a line chart of some values over time, grouped by fieldA, so by default there is a legend that indicates the unique values of fieldA by color. How can I display the line chart without the legend, i.e. how do I hide it? Thanks in advance!
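If this is a Classic (Simple XML) dashboard, hiding the legend is a single chart option in the panel source (Dashboard Studio has an equivalent toggle in the chart's configuration):

<option name="charting.legend.placement">none</option>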
When I save a search result as a report and schedule it, the next scheduled time is always set to NULL, so my reports are never generated. Please help.
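A first diagnostic step (a suggestion; the report name is a placeholder): the scheduler's own log shows whether the report is being picked up at all, and why runs are skipped.

index=_internal sourcetype=scheduler savedsearch_name="<your report name>"
| table _time status reason

A NULL next-run time is commonly caused by an invalid cron expression, the schedule being saved without actually enabling it, or the report's owner lacking the schedule_search capability - all worth checking before anything deeper.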
Hi all, I have a Splunk app with React and the JS SDK that calls a custom REST endpoint, which in turn calls a Python script that uses the Splunk Python SDK. Is there a way to pass the session of the logged-in user from the dashboard's JS/REST layer through to the Python script, so that I do not need to store authentication details for the Python SDK connection in a file? Best regards!
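If the endpoint is a persistent custom REST handler, the caller's session key is delivered inside the request payload, so the Python side can connect with that token instead of stored credentials. A sketch (assumes restmap.conf registers the handler with scripttype=persist; class and variable names are illustrative):

import json
import splunklib.client as client

class SessionForwardingHandler(object):
    def __init__(self, command_line, command_arg):
        pass

    def handle(self, in_string):
        request = json.loads(in_string)
        # splunkd delivers the logged-in user's session key with every request
        token = request["session"]["authtoken"]
        # Reuse that key for the SDK connection - no credentials stored on disk
        service = client.connect(host="localhost", port=8089, token=token)
        app_names = [app.name for app in service.apps]
        return json.dumps({"payload": json.dumps(app_names), "status": 200})

On the JS side nothing special should be needed: splunkjs requests made by the logged-in user already carry the session, and splunkd forwards it to the handler.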
Hi, thanks in advance. This is my JSON file; how do I extract fields from it using the props and transforms configuration files?

{ "AAA": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
{ "BBB": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
{ "CCC": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
{ "DDD": { "modified_files": [ "a/D:\\\\splunk\\\\A / ui/.env", "a/D:\\\\splunk\\\\A / ui/.env.example", "a/D:\\\\splunk\\\\B / ui/.env", "a/D:\\\\splunk\\\\B / ui/.env.example" ] } }
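A hedged props.conf sketch (the sourcetype name is a placeholder, and this assumes the objects really arrive back-to-back as }{ and that each top-level object should become one event; the LINE_BREAKER capture group swallows any whitespace between them):

[json_modified_files]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*)\{
DATETIME_CONFIG = CURRENT
KV_MODE = json
TRUNCATE = 0

With KV_MODE = json at search time, Splunk should auto-extract fields such as AAA.modified_files{} from each event, so a transforms.conf entry is only needed if you want to rename or further process those fields.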
Hi, I need to calculate the count and percentage of fields. The original post is here; the main issue is that field values contain spaces or are blank (two single quotation marks). I have SPL like this:

| eventstats count as namecount by name
| eventstats count as colorcount by color
| eventstats count as statuscount by status
| eventstats count(name) as nametotal
| eventstats count(color) as colortotal
| eventstats count(status) as statustotal
| eval name=printf("%04u %s %d", 10000-namecount, name, nametotal)
| eval color=printf("%04u %s %d", 10000-colorcount, color, colortotal)
| eval status=printf("%04u %s %d", 10000-statuscount, status, statustotal)
| stats values(name) as name values(color) as color values(status) as status
| eval cname=mvmap(name,10000-tonumber(mvindex(split(name," "),0)))
| eval ccolor=mvmap(color,10000-tonumber(mvindex(split(color," "),0)))
| eval cstatus=mvmap(status,10000-tonumber(mvindex(split(status," "),0)))
| eval pname=mvmap(name,100*(10000-tonumber(mvindex(split(name," "),0)))/tonumber(mvindex(split(name," "),2)))
| eval pcolor=mvmap(color,100*(10000-tonumber(mvindex(split(color," "),0)))/tonumber(mvindex(split(color," "),2)))
| eval pstatus=mvmap(status,100*(10000-tonumber(mvindex(split(status," "),0)))/tonumber(mvindex(split(status," "),2)))
| eval name=mvmap(name,mvindex(split(name," "),1))
| eval color=mvmap(color,mvindex(split(color," "),1))
| eval status=mvmap(status,mvindex(split(status," "),1))
| fields name cname pname color ccolor pcolor status cstatus pstatus

Some "date" or "color" values look like 'Mon May 30 00:00:00 USDT 2022' or ''. FYI: some contain spaces between the single quotation marks, like 'Mon May 30 00:00:00 USDT 2022', and some are empty, just two single quotation marks: ''. These are not shown correctly and their percentages are not calculated.

Current output:

Date      cDate   %Date   Color   cColor   %Color
'Mon      2               ''      1
'Today'   1       33.0    'red'   2        66.0

Expected output:

Date                              cDate   %Date   Color   cColor   %Color
'Mon May 30 00:00:00 USDT 2022'   2       66.66   ''      1        33.3
'Today'                           1       33.0    'red'   2        66.0
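Since the values themselves contain spaces, one option (a sketch, shown for the name field only - color and status change the same way) is to switch the printf/split delimiter from a space to a character that cannot occur in the data, such as a pipe:

| eval name=printf("%04u|%s|%d", 10000-namecount, name, nametotal)
| eval cname=mvmap(name, 10000-tonumber(mvindex(split(name,"|"),0)))
| eval pname=mvmap(name, 100*(10000-tonumber(mvindex(split(name,"|"),0)))/tonumber(mvindex(split(name,"|"),2)))
| eval name=mvmap(name, mvindex(split(name,"|"),1))

With a space delimiter, 'Mon May 30 00:00:00 USDT 2022' splits into many pieces, so mvindex(..., 1) returns only 'Mon and mvindex(..., 2) no longer points at the total - exactly the broken output above. With a pipe, the whole quoted string (and the empty '') survives as the middle token.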
Hi, I have an SPL search that should exclude the IP values found in 4 lookups, so I tried a subsearch approach. But this search takes much longer than usual to run. How can I optimize it?

index=a OR b action=* attack!=N/A NOT (
    [| inputlookup a.csv | fields ip | rename ip as src_ip] OR
    [| inputlookup b.csv | fields ip | rename ip as src_ip] OR
    [| inputlookup c.csv | fields ip | rename ip as src_ip] OR
    [| inputlookup d.csv | fields ip | rename ip as src_ip])
| stats dc(attack) as dc_attack by src_ip
| where dc_attack > 2
| dedup src_ip

Thanks in advance!
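Each subsearch runs inputlookup separately and expands into the base search string (subject to subsearch row limits); since all four filters test the same field, a sketch that filters with lookup instead may be faster (this assumes each CSV is usable as a lookup and has a column literally named ip):

index=a OR index=b action=* attack!=N/A
| lookup a.csv ip AS src_ip OUTPUT ip AS matched_a
| lookup b.csv ip AS src_ip OUTPUT ip AS matched_b
| lookup c.csv ip AS src_ip OUTPUT ip AS matched_c
| lookup d.csv ip AS src_ip OUTPUT ip AS matched_d
| where isnull(matched_a) AND isnull(matched_b) AND isnull(matched_c) AND isnull(matched_d)
| stats dc(attack) as dc_attack by src_ip
| where dc_attack > 2

The trade-off is that lookup filters after events are retrieved rather than at the indexers, but it avoids the subsearch expansion. The trailing dedup src_ip can be dropped either way: stats ... by src_ip already returns one row per src_ip.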
index="SOMETHING" earliest=-30d@d
| stats earliest(_time) as action_StartTime latest(_time) as action_EndTime
| eval elapsed_Time = action_EndTime - action_StartTime
| convert ctime(action_StartTime) ctime(action_EndTime) ctime(elapsed_Time)
| fields + action_StartTime action_EndTime elapsed_Time
| sort by action_StartTime

The elapsed_Time is wrong; how can I make it correct?
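A hedged guess at the cause: convert ctime() interprets its argument as an epoch timestamp, so applying it to elapsed_Time - a duration in seconds - produces a meaningless date near 1 Jan 1970. A sketch that renders the duration readably instead (also note that sort takes field names directly, without "by"):

index="SOMETHING" earliest=-30d@d
| stats earliest(_time) as action_StartTime latest(_time) as action_EndTime
| eval elapsed_Time=tostring(action_EndTime - action_StartTime, "duration")
| convert ctime(action_StartTime) ctime(action_EndTime)
| table action_StartTime action_EndTime elapsed_Time

tostring(X, "duration") formats seconds as [D+]HH:MM:SS, which is usually what's wanted for an elapsed interval.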
Hello, I use the cron schedule below in order to run the search "at minute 10 past every hour from 7 through 19":

10 7-19 * * *

Instead of this, I would like to run the search every 15 minutes, always between 7:00 and 19:00. Could you help please?
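Assuming the standard five-field cron syntax Splunk's scheduler uses, that would be:

*/15 7-19 * * *

This fires at :00, :15, :30 and :45 of every hour from 7 through 19, so the last run is at 19:45. If the window must close at 19:00 sharp, use */15 7-18 * * * and schedule the 19:00 run separately (Splunk allows only one cron expression per scheduled search, so that would mean a clone of the search) - or simply accept the 19:15-19:45 runs.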
I would like to know if there is a way to check when the rsync and Postgres sync of data from primary to standby has completed. It is needed in the following scenario:

1. Warm standby is set up and working from instance A to instance B.
2. Instance A goes down and Phantom fails over to instance B.
3. Now all updates are happening on instance B. Assume it continues this way for a number of days.
4. Instance A comes back up, and we configure warm standby again with instance B as primary and instance A as standby, to let the updates made on instance B over the last few days flow back to instance A.
5. This configuration only needs to stay in place until all updates are synced back to instance A from B, after which we can return to the original configuration with instance A as primary and instance B as standby. For that, we need to know: how do we verify that all updates/data are synced from primary to standby?
To start - I was suggested this solution, but although the question is very similar, the answer marked as a solution doesn't actually provide the quantitative total I am looking for. I have a series of events with a start and stop time, in epoch time. These events can be grouped by a common field, `host`, and I am trying to determine the total amount of deduplicated time that these events span. For example:

Host_1, Event_1: starts at 13:00, ends at 13:15
Host_1, Event_2: starts at 13:10, ends at 13:20
Host_1, Event_3: starts at 13:30, ends at 14:00

The total time for Host_1 would therefore be 50 minutes:

Event_1: 15 minutes
Event_2: 5 minutes (10 minutes - 5 minutes of overlap with Event_1)
Event_3: 30 minutes (no overlap with any other events)
Total: 50 minutes

I had tried to leverage streamstats to get information about previous events, but couldn't work out how to get it to reset properly when the events didn't overlap. I'm not even sure streamstats is the best method for this type of problem.

EDIT: some test data may be helpful.

0,"hostname","start_time","end_time"
1,"host_1","1654130041.626307","1654130566.626307"
2,"host_1","1654131696.975800","1654133451.975800"
3,"host_1","1654132454.687189","1654134263.687189"
4,"host_1","1654132747.975800","1654133451.975800"
5,"host_1","1654133805.740912","1654134236.740912"
6,"host_1","1654136688.170093","1654136722.170093"
7,"host_1","1654136782.300892","1654136818.300892"
8,"host_1","1654136885.031861","1654137288.031861"
9,"host_1","1654137388.801936","1654139394.801936"

Doing the math, rows 3 and 4 both have `start_time` values that are earlier than row 2's `end_time` value - indicating that a duration overlap occurs in several rows.
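A sketch of the classic interval-union sweep using streamstats (field names from the test data above; tonumber guards against the CSV values arriving as strings): sort by start time, carry the furthest end time seen so far per host, and count only the part of each event that extends beyond it. No reset logic is needed - clamping at zero handles disjoint and fully-contained events alike.

| eval start_time=tonumber(start_time), end_time=tonumber(end_time)
| sort 0 hostname start_time
| streamstats current=f max(end_time) as prev_end by hostname
| eval eff_start=if(isnotnull(prev_end), max(start_time, prev_end), start_time)
| eval dedup_seconds=max(end_time - eff_start, 0)
| stats sum(dedup_seconds) as total_seconds by hostname

Checked against the 13:00-13:15 / 13:10-13:20 / 13:30-14:00 example: the second event contributes 13:20 - max(13:10, 13:15) = 5 minutes, the third its full 30, giving the expected 50 total.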
Hi guys, I am looking for ways to alert when memory usage rises or dips. Can you please advise which MLTK approach I should use for this case? Thank you!
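One commonly used MLTK approach (a sketch; index, field, and model names are placeholders) is DensityFunction, which learns the metric's normal distribution and flags readings in the low-probability tails - covering both rises and dips:

| ... your memory-usage search ... | fit DensityFunction mem_used_pct threshold=0.01 into mem_baseline

and then a scheduled alert along the lines of:

| ... your memory-usage search ... | apply mem_baseline | where 'IsOutlier(mem_used_pct)'=1

Adding a "by host" clause to the fit gives each server its own baseline, and the Smart Outlier Detection assistant in recent MLTK versions wraps this same workflow in a UI.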
Hi there, I am trying to generate a choropleth map of the US using the following search:

| iplocation final_ip
| search Country="United States"
| stats count as volume by Region
| rename Region as state
| dedup state
| table state volume
| geom geo_us_states featureIdField="state" allFeatures=True

This gives a response with the fields state, volume, featureCollection, and geom, but the map is still empty. Using geostats instead and then doing a lookup does give a map, but the count (aka volume) is very low. Can you help please?
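A hedged observation: after stats count by Region there is already one row per region, so the dedup is harmless but redundant; more importantly, geom geo_us_states matches on full state names ("California", not "CA"), and the panel's visualization type must actually be set to Choropleth Map. A trimmed version to try:

| iplocation final_ip
| search Country="United States"
| stats count as volume by Region
| rename Region as state
| geom geo_us_states featureIdField="state"

If the map is still empty, comparing the values of state (| stats values(state)) against the feature names in geo_us_states usually reveals a naming mismatch; iplocation's Region field generally holds full state names for US IPs, but blanks or abbreviations in the data would leave features unmatched.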