All Topics

Here is my KV store lookup:

name  rating  comment    experience  subject
A     3       good       4           math
B     4       very good  7           science

Now I want to append a new row like this, with a different rating:

name  rating  comment    experience  subject
A     3       good       4           math
B     4       very good  7           science
A     5       Excellent  4           math

I am trying to use:

| inputlookup table_a
| search name="A"
| eval rating=5, comment="Excellent", key=_key
| outputlookup append=true key_field=key table_a

But this is not working. Can someone please help me with this? Thanks.

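For what it's worth, here is a minimal sketch of an append-only variant, on the assumption that the goal is a brand-new record rather than an update of the existing one: because the pipeline above copies the existing _key and hands it back through key_field, outputlookup overwrites that record instead of adding a row. Dropping the key so the KV store generates a fresh one should append instead:

| inputlookup table_a
| search name="A"
| eval rating=5, comment="Excellent"
``` drop the existing KV store key so a new record is created instead of updating the old one ```
| fields - _key
| outputlookup append=true table_a
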
Hi, is there any way to get the p95 of URI1, URI2, URI3 if the p95 of URI4 is greater than 2 sec? I tried the query below, but it gives the p95 of only those URIs whose p95 > 2. I'm expecting the p95 of all of URI1, URI2, URI3 if the condition is satisfied.

index=myindex URI in (URI1, URI2, URI3, URI4)
| stats perc95(responsetime) as p95 by URI
| where p95>2

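One way to express this, sketched on the assumption that the URI field values are literally URI1..URI4: compute p95 for every URI, copy URI4's p95 onto every row with eventstats, and only then filter, so the other URIs survive whenever the condition holds:

index=myindex URI in (URI1, URI2, URI3, URI4)
| stats perc95(responsetime) as p95 by URI
``` put URI4's p95 on every row so it can be used as the filter condition ```
| eventstats max(eval(if(URI=="URI4", p95, null()))) as uri4_p95
| where uri4_p95 > 2 AND URI!="URI4"
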
Hello, how do I outputlookup a CSV with permissions? (Note that I am not a Splunk admin - I only have access to the Splunk GUI.) For example:

| outputlookup test.csv

This creates test.csv in the directory below with no owner and sharing set to Global. I am able to delete it, but I cannot modify the permissions. How do I outputlookup a CSV so that sharing is set to App and I am the owner?

/opt/splunk/etc/apps/testapp/lookups/test.csv
    Owner: No owner
    App: testapp
    Sharing: Global
    Status: Enabled

Please help. Thank you so much.

I need to find abnormalities in my data. The data I have is individual views for certain movie titles. I need to find content that was abnormally popular over some small time interval, say 1 hour, and check a few weeks' worth of data. One option is to run a query manually for each hour:

``` Run this over a 60m time window ```
index=mydata
| top limit=100 movieId

Obviously I don't want to run this query 24 * 7 = 168 times for one week's worth of data. How can I bin the data into time buckets and get a percentage ratio by movieId? This is what I came up with:

``` Run this over 1+ week ```
index=mydata
| bin span=60m _time
| top limit=100 movieId, _time

This does not help me because the output of `top` shows a percentage based on the entire input set of data. I need a "local" percentage, i.e. a percentage based on only the slice of data in each bin. I'm wondering if eventstats or streamstats could be useful here, but I was not able to come up with a query using those commands.

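A sketch of one stats/eventstats approach, reusing index=mydata from above: count views per movie per hourly bin, total each bin with eventstats, and derive a per-bin percentage; the last two lines mimic "top 100 per hour":

index=mydata
| bin span=60m _time
| stats count by _time, movieId
``` total views in each hourly bin, so the percentage is local to that bin ```
| eventstats sum(count) as bin_total by _time
| eval percent=round(count / bin_total * 100, 2)
| sort 0 _time - percent
``` keep the top 100 titles within each bin ```
| streamstats count as rank by _time
| where rank <= 100
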
Hi, if I select "MM", how do I get the entities associated with that particular domain (i.e. material, supplied material)? My second dropdown has static values, which then fetch the results for 4 different queries in parallel when I select a data entity. How do I then provide a multiselect option for (material and supplied material)?

Hello Splunk community, I'm in the process of installing Splunk for the first time on a Windows server. I've followed the official installation guide, but I've encountered an issue during the installation process. After running the installer, I received an error message that says 'Error 123: The filename, directory name, or volume label syntax is incorrect.' I've double-checked the installation path and made sure there are no special characters, but I still can't seem to get past this error. Has anyone else experienced this issue during installation? What steps can I take to resolve it and successfully install Splunk on my Windows server? Any help would be greatly appreciated. Thank you!
For anyone using Hurricane Labs "Broken hosts" app (https://splunkbase.splunk.com/app/3247), note that the latest version, 4.2.2, appears to have a very minor but breaking bug. The file /default/savedsearches.conf has a stanza for the "Broken Hosts Alert - by contact" alert. Depending on how you use the app, that potentially drives your entire alerting mechanism. Two lines in that file (121 & 130) wrap a built-in search macro in double quotes where they should not be:

| fillnull value="`default_expected_time`" lateSecs

should be:

| fillnull value=`default_expected_time` lateSecs

The result of this is to assign the string value "`default_expected_time`" to the lateSecs variable, rather than expanding to whatever default integer you configured in the macro. Removing those double quotes from both lines seems to fix the issue. I've also raised an issue on the Hurricane Labs github page below...though activity there is pretty stale and I'm not sure if anyone is looking there... https://github.com/HurricaneLabs/brokenhosts/issues/3

Hi All, is there any way to enable and disable Splunk alerts automatically based on the log source? For example, we have Site 1 and Site 2 in an active-passive setup.

Case 1: Site 1 is active and Site 2 is passive. All Site 1 alerts should be enabled automatically; we can search for the Site 1 host as the condition to enable the alerts.

Case 2: Site 2 is active and Site 1 is passive. All Site 2 alerts should be enabled automatically; we can search for the Site 2 host as the condition to enable the alerts.

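One self-gating pattern worth considering (a sketch only; index=heartbeat, the host names, and the sourcetype are hypothetical stand-ins for however you detect the active site): leave both sites' alerts enabled, but have each alert append a check for its own site's recent activity and return no rows when that site is passive, so the alert simply never fires:

index=site1_app sourcetype=app_errors host=site1*
| stats count as error_count by host
``` append a single row indicating whether Site 1 has reported any data recently ```
| append
    [ search index=heartbeat host=site1* earliest=-15m
      | stats count as site1_active ]
| eventstats max(site1_active) as site1_active
``` drop everything (and so suppress the alert) when Site 1 is passive ```
| where site1_active > 0 AND isnotnull(error_count)
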
Splunk queries are not returning anything in the table. I see events matching these queries, but nothing under the 'Statistics' section.

1. index=address-validation RESP_MARKER
   | rex field=log "\"operationPath\"\:\"(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*"
   | table path, type, reg

2. index=club-finder RESP_MARKER
   | rex field=log "\"operationPath\"\:\"\/(?<path>\w+).*\"operationType\"\:\"(?<type>\w+).*\"region\"\:\"(?<reg>\w+).*\"totalTime\"\:(?<timeTaken>\w+)"
   | table type, path, timeTaken, reg

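If the log field contains JSON, one thing worth trying (a sketch, assuming JSON keys named operationPath, operationType, and region as in the rex above, and that a field called log actually exists on the events) is spath, which sidesteps the escaping pitfalls of a hand-written regex:

index=address-validation RESP_MARKER
| spath input=log path=operationPath output=path
| spath input=log path=operationType output=type
| spath input=log path=region output=reg
| table path, type, reg
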
Hello Experts, I am looking at an alert that uses a join to match a work_center with a work order. I am wondering which records in a stream of records the join looks at to get that result. Is there a way to get the latest result? To explain further, the work center will in some cases change based on where work is being completed, so I would like to grab the latest result when the alert runs. The current code I am looking at gives us a way to compare the work center in source="punch" vs the current stream of data. I am wondering if I can further manipulate that subsearch to look at the last result in source="punch". I tried a couple of things but didn't have any luck; I'm not super familiar with joins in my normal work.

| join cwo type left [search source=punch | rename work_center as position]

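A sketch of one common way to keep only the most recent work_center per work order before joining, assuming the shared work-order field is cwo in both datasets (note that join's option is normally written type=left):

``` sort newest-first, then dedup keeps the first (i.e. latest) punch record per work order ```
| join type=left cwo
    [ search source=punch
      | sort 0 - _time
      | dedup cwo
      | rename work_center as position
      | fields cwo position ]
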
Hi All, I have many indexes and sourcetypes, but I don't know which ones I have to use to search for traffic from a specific IP address and port. Please guide me on how I can identify and use the existing indexes and sourcetypes to analyze particular traffic.

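A sketch of one way to see what data exists before narrowing the search (the 24-hour range is arbitrary and can be widened): tstats lists every index/sourcetype pair that received events, which gives a short list of candidates to inspect for IP and port fields:

``` list which indexes and sourcetypes received data in the last 24 hours ```
| tstats count where index=* earliest=-24h by index, sourcetype
| sort 0 - count
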
We have a standalone environment and are getting the error "the percentage of non-high priority searches skipped (61%) over the last 24 hours is very high and exceeded the red threshold (20%) on this splunk instance."

The environment: the customer has a standalone instance where we created an app with a saved-search script that pulls all indexed events every 1 hour and bundles them into a .json file; the customer then compresses it into a .gz file for transfer into our production environment.

What we are seeing is this skipped-searches message, and when we check the specific job, we see that every time it runs there are two jobs: the export app started by Python calling the script, and then the actual search job with our SPL search. Both jobs are 1 second apart and stay on the jobs page for 10 minutes each; the customer states that it takes ~2.5 minutes for this job to complete. The Python script seems to stay around longer for some reason, even after its job finishes.

Not sure how to proceed. We had it scheduled every 4 hours and it was doing the same thing, so we lowered it to 1 hour - no difference. Our search looks at the last completed .json file's epoch time and the current epoch time to grab the events in that range, so I'm not sure if that message is a false positive caused by the way we are catching events (timestamps). How can I remove the skipped-searches error message? Tips?

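A sketch that may help pinpoint which scheduled searches are actually being skipped and the scheduler's stated reason (this only uses standard _internal scheduler data, nothing specific to the export app):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort 0 - count
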
Hello, I have a search, shown below, which gives me the start time (start_run), end time (end_run), and duration whenever the value of ValueE is greater than 20 for the Instrument my_inst_226. I need to get the values of ValueE from 11 other Instruments for the duration that my_inst_226's ValueE is greater than 20. I would like to use start_run and end_run to find those values of ValueE. I'm thinking that start_run and end_run would be variables I can use when searching ValueE for my 11 other Instruments, but I am stuck on how to use start_run and end_run for the next stage of my search.

index=my_index_plant sourcetype=my_sourcetype_plant Instrument="my_inst_226"
| sort 0 Instrument _time
| streamstats global=false window=1 current=false last(ValueE) as previous by Instrument
| eval current_over=if(ValueE > 20, 1, 0)
| eval previous_over=if(previous > 20, 1, 0)
| eval start=if(current_over=1 and previous_over=0,1,0)
| eval end=if(current_over=0 and previous_over=1,1,0)
| where start=1 OR end=1
| eval start_run=if(start=1, _time, null())
| eval end_run=if(end=1, _time, null())
| filldown start_run end_run
| eval run_duration=end_run-start_run
| eval check=_time
| where end=1
| streamstats count as run_id
| eval earliest=strftime(start_run, "%F %T")
| eval latest=strftime(end_run, "%F %T")
| eval run_duration=tostring(run_duration, "duration")
| table run_id earliest latest start_run end_run run_duration current_over previous_over end Instrument ValueE

Any and all tips, help and advice will be gratefully received.

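One possible next stage, sketched with the map command (maxsearches, and the stats functions inside the mapped search, are placeholders to adjust): feed each start_run/end_run pair into a follow-up search over the other instruments, using the epoch values as earliest and latest:

``` ...the existing search from above goes here, ending with the table of runs... ```
| table run_id start_run end_run
| map maxsearches=100 search="search index=my_index_plant sourcetype=my_sourcetype_plant Instrument!=\"my_inst_226\" earliest=$start_run$ latest=$end_run$ | stats max(ValueE) as max_ValueE avg(ValueE) as avg_ValueE by Instrument | eval run_id=$run_id$"
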
I am getting a 500 internal server error when I try to connect to the HF GUI. I ran firewall-cmd --list-ports, and it shows 8000/tcp. I also checked web.conf, and it shows enableSplunkWebSSL = 1, as well as httpport = 8000. What else can I check? I appreciate the help in advance!

Hi Team, I am continuously getting the below 2 errors after a restart. I am getting these errors on the indexer cluster:

ERROR SearchProcessRunner [531293 PreforkedSearchesManager-0] - preforked process=0/33361 hung up
WARN HttpListener [530927 HTTPDispatch] - Socket error from <search head IP address>:50094 while accessing /services/streams/search: Broken pipe

Please help to resolve these errors.

Hello, I'm trying to use the global account variables:

username > ${global_account.username} as tenantID
password > ${global_account.password} as token ID

to build the REST URL dynamically, but it seems that the content of the global variables is not filled in:

2023-09-13 14:51:12,726 - test_REST_API - [ERROR] - [test] HTTPError reason=HTTP Error Invalid URL '{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf': No scheme supplied. Perhaps you meant http://{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf? when sending request to url={{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf method=GET Traceback (most recent call last):

${global_account.username} has been tested with and without the https:// prefix. Can anyone please help me?

Hello, I am trying to build a report that alerts us when a support ticket is about to hit 24 hours. The field we are using is a custom time field called REPORTED_DATE, and it displays the time like 2023-09-11 08:44:03.0. I need a report that tells us when tickets are within 12 hours or less of crossing the 24-hour mark.

This is our code so far:

((index="wss_desktop_os") (sourcetype="support_remedy")) earliest=-1d@d
| search ASSIGNED_GROUP="DESKTOP_SUPPORT" AND STATUS_TXT IN ("ASSIGNED", "IN PROGRESS", "PENDING")
| eval TEST = REPORTED_DATE
| eval REPORTED_DATE2=strptime(TEST, "%Y-%m-%d")
| eval MTTRSET = round((now() - REPORTED_DATE2) /3600)
```| eval MTTR = strptime(MTTRSET, "%Hh, %M")```
| dedup ENTRY_ID
| stats LAST(REPORTED_DATE) AS Reported, values(ASSIGNEE) AS Assignee, values(STATUS_TXT) as Status, values(MTTRSET) as MTTR by ENTRY_ID

Any help would be appreciated. I will admit I struggle with time calculations.

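A sketch of the time math, on the assumption that REPORTED_DATE always looks like 2023-09-11 08:44:03.0: parse the full timestamp (the "%Y-%m-%d" format above drops the time of day), compute the ticket age in hours, and keep tickets between 12 and 24 hours old:

| eval REPORTED_EPOCH=strptime(replace(REPORTED_DATE, "\.\d+$", ""), "%Y-%m-%d %H:%M:%S")
``` age of the ticket in hours ```
| eval age_hours=round((now() - REPORTED_EPOCH) / 3600, 1)
``` tickets within 12 hours of breaching the 24-hour mark ```
| where age_hours >= 12 AND age_hours < 24
| eval hours_until_breach=round(24 - age_hours, 1)
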
Our Splunk environment is chronically under resourced, so we see a lot of this message: [umechujf,umechujs] Configuration initialization for D:\Splunk\etc took longer than expected (10797ms) when dispatching a search with search ID _MTI4NDg3MjQ0MDExNzAwNUBtaWw_MTI4NDg3MjQ0MDExNzAwNUBtaWw__t2monitor__ErrorCount_1694617402.9293. This usually indicates problems with underlying storage performance. It is our understanding that the core issue here is not so much storage, but processor availability.  Basically Splunk had to wait 10.7 seconds for the specified pool of processors to be available before it could run the search.  We are running a single SH and single IDX.  Both are configured for 10 CPU cores.  Also, this is a VM environment, so those are shared resources.  I know, basically all of the things Splunk advises against (did I mention also running Windows?).  No, we can't address the overall resource situation right now. Somewhere the idea came up that reducing the quantity of cores might help improve processor availability, so if Splunk were only waiting for 4 or 8 cores, it would at least get to the point of beginning the search with less initial delay as it would have to wait for a smaller pool of cores to be available first. So our question is, which server is most responsible for the delay, the SH or the IDX?  Which would be the better candidate for reducing the number of available cores?  
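A sketch that might help attribute the delay, assuming both the SH and the IDX forward their _internal logs: count where the "Configuration initialization ... took longer than expected" messages are actually emitted and how long the waits are, since the host that logs them is the one stalling on configuration load:

index=_internal sourcetype=splunkd "Configuration initialization" "took longer than expected"
| rex "took longer than expected \((?<init_ms>\d+)ms\)"
| stats count avg(init_ms) as avg_ms max(init_ms) as max_ms by host
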
Hi All, I didn't get any results using the below search query. How do I check and confirm the index and sourcetype specifically so I can make the query precise?

index=* | search src=**.**.***.** OR **.**.***.** dest_ip=**.***.***.*** dest_port=443

How do I confirm the sourcetype and index?

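A sketch of one way to narrow this down (the x.x.x.x values are placeholders for your real addresses, and src/dest_ip are CIM-style assumptions; your data may use src_ip or other field names). Grouping the OR in parentheses keeps it from swallowing the rest of the filter, and the stats shows exactly which index and sourcetype the matching traffic lives in:

index=* (src="x.x.x.x" OR src="y.y.y.y") dest_ip="z.z.z.z" dest_port=443 earliest=-4h
| stats count by index, sourcetype
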
I am trying to restrict access to a KV store lookup in Splunk. When I set the read/write permissions only for users assigned to the test_role role, it should not be accessible by any user outside that role, but it isn't working as expected with a KV store lookup. Can anybody suggest how to achieve this?

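For reference, a sketch of what the restriction can look like in the app's metadata/local.meta (my_kvstore_lookup and my_kvstore_collection are hypothetical names for your lookup definition and collection): a KV store lookup involves two knowledge objects, and both the lookup definition (transforms) and the collection itself typically need their ACLs restricted, otherwise the data can still be reached through the other object:

[transforms/my_kvstore_lookup]
access = read : [ test_role ], write : [ test_role ]
export = none

[collections/my_kvstore_collection]
access = read : [ test_role ], write : [ test_role ]
export = none
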