All Posts

Hi, thanks for the reply. Yes, I have some indexes and sourcetypes, but I don't know how to choose the index and sourcetype for this IP address. Thanks,
Hello, I have a search, shown below, which gives me the start time (start_run), end time (end_run) and duration when the value of ValueE is greater than 20 for the Instrument my_inst_226. I need to get the values of ValueE from 11 other Instruments for the duration when ValueE for my_inst_226 is greater than 20. I would like to use start_run and end_run to find those values of ValueE. I'm thinking that start_run and end_run would be variables I can use when searching ValueE for my 11 other Instruments, but I am stuck on how to use start_run and end_run for the next stage of my search.

index=my_index_plant sourcetype=my_sourcetype_plant Instrument="my_inst_226"
| sort 0 Instrument _time
| streamstats global=false window=1 current=false last(ValueE) as previous by Instrument
| eval current_over=if(ValueE > 20, 1, 0)
| eval previous_over=if(previous > 20, 1, 0)
| eval start=if(current_over=1 and previous_over=0,1,0)
| eval end=if(current_over=0 and previous_over=1,1,0)
| where start=1 OR end=1
| eval start_run=if(start=1, _time, null())
| eval end_run=if(end=1, _time, null())
| filldown start_run end_run
| eval run_duration=end_run-start_run
| eval check=_time
| where end=1
| streamstats count as run_id
| eval earliest=strftime(start_run, "%F %T")
| eval latest=strftime(end_run, "%F %T")
| eval run_duration=tostring(run_duration, "duration")
| table run_id earliest latest start_run end_run run_duration current_over previous_over end Instrument ValueE

Any and all tips, help and advice will be gratefully received.
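One possible approach, offered only as a sketch and assuming the other 11 Instruments live in the same index and sourcetype, is to append the map command to the end of the existing search; map runs a secondary search once per row and substitutes row fields (here start_run, end_run and run_id) into that search's earliest/latest:

| table run_id start_run end_run
| map maxsearches=50 search="search index=my_index_plant sourcetype=my_sourcetype_plant Instrument!=my_inst_226 earliest=$start_run$ latest=$end_run$ | stats max(ValueE) as max_ValueE avg(ValueE) as avg_ValueE by Instrument | eval run_id=$run_id$"

Because map re-runs the inner search once per run, cap maxsearches at the number of runs you expect; for very long time ranges with many runs it may be cheaper to restructure around a single search instead.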
Hi @jmrubio, did you disable the local firewall on this server? Also check whether you disabled the web interface in server.conf. Ciao. Giuseppe
Hi @DaClyde, the reference hardware requirement is 12 CPUs and 12 GB RAM for both servers if you don't have a Premium App (ES or ITSI), and it also depends on the number of users and scheduled searches; these resources must be dedicated, not shared. In addition, the bottleneck of every Splunk infrastructure is storage performance: Splunk requires at least 800 IOPS. You can analyze indexing and search performance using the Monitoring Console app. Then you could transform any real-time searches into scheduled searches. Ciao. Giuseppe
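As a rough starting point for that analysis, and assuming the _internal index is searchable from your search head, a sketch like the following shows whether scheduled searches are being skipped because of resource contention:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name reason
| sort - count

If skipped searches cluster around particular times or reasons, that usually points at where the CPU or concurrency pressure is.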
I am getting a 500 Internal Server Error when I try to connect to the HF GUI. I ran firewall-cmd --list-ports, and it shows 8000/tcp. I also checked web.conf, and it shows enableSplunkWebSSL = 1 as well as httpport = 8000. What else can I check? I appreciate the help in advance!
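If splunkd itself is running, one way to dig further, sketched here on the assumption that default internal logging is intact and that <your_hf> is a placeholder for the heavy forwarder's host name, is to look at the Splunk Web logs around the time of the failure:

index=_internal host=<your_hf> (sourcetype=splunk_web_service log_level=ERROR) OR (sourcetype=splunkd_ui_access status=500)
| sort - _time
| table _time sourcetype log_level status _raw

It may also be worth double-checking that you are browsing to https:// rather than http://, since enableSplunkWebSSL = 1 is set.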
Hi @Jana42855, I suppose that you already have the logs indexed and stored in an index with one sourcetype. First you should identify the index where the logs are stored and the sourcetype to use. Then, using that index and sourcetype, you should check whether the field names are correct (field names are case sensitive) and whether the fields used in the search (src, dest_ip, dest_port) are present in all events. You also don't need to use the search command: put all the parameters in the main search and you'll have a more performant search. And don't use index=*, because it is slower than index=your_index.

index=<your_index> src=**.**.***.** OR **.**.***.** dest_ip=**.***.***.*** dest_port=443

Ciao. Giuseppe
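As a quick way to confirm that those field names actually exist in the chosen index and sourcetype (a sketch only; the index and sourcetype names are placeholders), something like this can help:

index=<your_index> sourcetype=<your_sourcetype> earliest=-24h
| fieldsummary
| search field IN (src, dest_ip, dest_port)
| table field count distinct_count values

If a field doesn't appear at all in the output, the search in the original question can never match on it.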
Assuming your fields are called column_1 and column_2, you could try something like this

| rex field=column_1 "(?<Server>[^:]+):(?<Incident>[^:]+):(?<IncidentNumber>[^:]+):(?<Severity>.*)"
| eventstats values(Severity) as AllSeverities by Server Incident IncidentNumber
| eval AllSeverities=if(Severity="Clear",AllSeverities,Severity)
| mvexpand AllSeverities
| eval column_1=Server.":".Incident.":".IncidentNumber.":".AllSeverities
| fields column_1 column_2
| dedup column_1 column_2
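To see the idea in action before pointing it at real data, here is a small self-contained test; the sample values are invented, and only the first five lines generate the test rows:

| makeresults count=3
| streamstats count as n
| eval column_1=case(n=1, "serverA:zabbix:123456:Warning", n=2, "serverA:zabbix:123456:Critical", n=3, "serverA:zabbix:123456:Clear")
| eval column_2=case(n=1, "Warning", n=2, "Critical", n=3, "Clear")
| fields - n
| rex field=column_1 "(?<Server>[^:]+):(?<Incident>[^:]+):(?<IncidentNumber>[^:]+):(?<Severity>.*)"
| eventstats values(Severity) as AllSeverities by Server Incident IncidentNumber
| eval AllSeverities=if(Severity="Clear",AllSeverities,Severity)
| mvexpand AllSeverities
| eval column_1=Server.":".Incident.":".IncidentNumber.":".AllSeverities
| fields column_1 column_2
| dedup column_1 column_2

The Clear row expands into Warning, Critical and Clear rows with column_2=Clear, which matches the paired Warning/Critical records the original poster wants to close out.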
Hi Team, I am continuously getting the two errors below after I did a restart. These errors appear on the indexer cluster:

ERROR SearchProcessRunner [531293 PreforkedSearchesManager-0] - preforked process=0/33361 hung up
WARN HttpListener [530927 HTTPDispatch] - Socket error from <search head IP address>:50094 while accessing /services/streams/search: Broken pipe

Please help to resolve these errors.
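As a first step, and assuming internal logs from all the peers are searchable, a sketch like this shows how often the two messages occur, on which indexers, and whether they started right after the restart:

index=_internal sourcetype=splunkd (component=SearchProcessRunner "hung up") OR (component=HttpListener "Broken pipe")
| timechart span=15m count by host

If the broken-pipe warnings line up with searches being cancelled or timing out on the search head, they are often a symptom rather than the root cause.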
Hello, I'm trying to use the global account variables (username > ${global_account.username} as tenant ID, password > ${global_account.password} as token ID) to build the REST URL dynamically, but it seems that the content of the global variables is not being filled in:

2023-09-13 14:51:12,726 - test_REST_API - [ERROR] - [test] HTTPError reason=HTTP Error Invalid URL '{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf': No scheme supplied. Perhaps you meant http://{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf? when sending request to url={{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf method=GET Traceback (most recent call last):

The ${global_account.username} has been tested with and without the https:// prefix. Can anyone please help me?
Hello, I am trying to build a report that alerts us when a support ticket is about to hit 24 hrs. The field we are using is a custom time field called REPORTED_DATE, and it displays the time like 2023-09-11 08:44:03.0. I need a report that tells us when tickets are within 12 hrs or less of crossing the 24 hour mark.

This is our code so far:

((index="wss_desktop_os") (sourcetype="support_remedy")) earliest=-1d@d
| search ASSIGNED_GROUP="DESKTOP_SUPPORT" AND STATUS_TXT IN ("ASSIGNED", "IN PROGRESS", "PENDING")
| eval TEST = REPORTED_DATE
| eval REPORTED_DATE2=strptime(TEST, "%Y-%m-%d")
| eval MTTRSET = round((now() - REPORTED_DATE2) /3600)
```| eval MTTR = strptime(MTTRSET, "%Hh, %M")```
| dedup ENTRY_ID
| stats LAST(REPORTED_DATE) AS Reported, values(ASSIGNEE) AS Assignee, values(STATUS_TXT) as Status, values(MTTRSET) as MTTR by ENTRY_ID

Any help would be appreciated. I will admit I struggle with time calculations.
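A sketch of one way to finish this, assuming the SLA is 24 hours measured from REPORTED_DATE; the subsecond specifier in the format string may need adjusting to your data:

| eval REPORTED_EPOCH=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S.%1N")
| eval age_hours=round((now() - REPORTED_EPOCH) / 3600, 1)
| eval hours_until_breach=round(24 - age_hours, 1)
``` keep only tickets within 12 hours of crossing the 24-hour mark ```
| where hours_until_breach <= 12 AND hours_until_breach > 0
| table ENTRY_ID REPORTED_DATE age_hours hours_until_breach

The key change from the original attempt is that the strptime format includes the time-of-day portion, so the ticket age is no longer rounded to whole days.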
It worked perfectly for me. Thank you again.
Hi etoombs, many thanks for the suggestion, I got that sorted. Ta
Our Splunk environment is chronically under-resourced, so we see a lot of this message:

[umechujf,umechujs] Configuration initialization for D:\Splunk\etc took longer than expected (10797ms) when dispatching a search with search ID _MTI4NDg3MjQ0MDExNzAwNUBtaWw_MTI4NDg3MjQ0MDExNzAwNUBtaWw__t2monitor__ErrorCount_1694617402.9293. This usually indicates problems with underlying storage performance.

It is our understanding that the core issue here is not so much storage as processor availability. Basically, Splunk had to wait 10.7 seconds for the specified pool of processors to be available before it could run the search. We are running a single SH and single IDX, both configured for 10 CPU cores. Also, this is a VM environment, so those are shared resources. I know, basically all of the things Splunk advises against (did I mention also running Windows?). No, we can't address the overall resource situation right now.

Somewhere the idea came up that reducing the quantity of cores might help improve processor availability, so if Splunk were only waiting for 4 or 8 cores, it would at least get to the point of beginning the search with less initial delay, as it would have to wait for a smaller pool of cores to become available first.

So our question is: which server is most responsible for the delay, the SH or the IDX? Which would be the better candidate for reducing the number of available cores?
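To see which instance is actually reporting the delay and how bad it gets, assuming both servers forward their internal logs, a sketch like this groups the message by host:

index=_internal sourcetype=splunkd "Configuration initialization for" "took longer than expected"
| rex "took longer than expected \((?<init_ms>\d+)ms\)"
| stats count avg(init_ms) as avg_ms max(init_ms) as max_ms by host
| sort - avg_ms

Whichever host dominates that table is the more natural candidate for tuning.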
@Sparky1, were you able to find a solution for ingesting the Sophos audit logs?
Hi All, I didn't get any results using the query below. How can I check and confirm the index and sourcetype so I can make the query more precise?

index=* | search src=**.**.***.** OR **.**.***.** dest_ip=**.***.***.*** dest_port=443

How do I confirm the sourcetype and index?
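One way to find where an IP address shows up at all, sketched here with a placeholder for the address and a deliberately short time range because a raw-term search across every index can be slow:

index=* "<your_ip_address>" earliest=-4h
| stats count by index sourcetype

Once you know which index and sourcetype actually contain the address, use them explicitly in the main search instead of index=*.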
Hi, I did it. You can use the return_to URL parameter. Here's an example URL:

https://HOSTNAME:8000/en-GB/app/search/search?return_to=/en-GB/app/MYAPP/search

When the user clicks on this URL, they will first be taken to the search app, and then they will be redirected to your preferred app MYAPP/search. Manually test the redirection, and use browser developer tools to inspect network requests and redirects, or use an online tool like https://redirectchecker.com/. A detailed redirection report might help pinpoint the issue. Let me know if you still have an issue. Ref: doc
Hi @gcusello, thanks for your reply. Instead of having the outputs as 2 columns, I need to have two rows generated. For example, serverA has generated an incident that is a warning (say disk space):

serverA:zabbix:123456:Warning Warning

The tool picks up the event and generates a ticket. Let's say nobody has done anything with it. That disk has now reached critical and the incident escalates. Splunk picks up the event:

serverA:zabbix:123456:Critical Critical

Because column1 is unique, the tool picks up the event and calls out the team. The team then clears the space. Splunk picks up the event as:

serverA:zabbix:123456:Clear Clear

However, that does not match column1 above. What I need is that when a clear is generated, Splunk generates 2 "fake" records that would look as follows:

serverA:zabbix:123456:Warning Clear
serverA:zabbix:123456:Critical Clear

so that column1 matches the initial columns above and the tool will pick up 2 events and clear both records that were generated. Thanks, David
In your stats statement, add the other fields you need using evals: count(eval(status="Success")) as Success, count(eval(status="Failed")) as Failed, and remove the status from the by clause. After the stats, do an eval to calculate your percentages. 
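A short illustration of that pattern; the index, sourcetype and grouping field names are placeholders:

index=<your_index> sourcetype=<your_sourcetype>
| stats count(eval(status="Success")) as Success count(eval(status="Failed")) as Failed by <your_group_field>
| eval Total=Success+Failed
| eval pct_success=round(100*Success/Total, 2)
| eval pct_failed=round(100*Failed/Total, 2)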
Hi @David_B, you have to divide the fields from column1 using a regex, something like this:

<your_search>
| rex field=column1 "^([^:]*:){3}(?<severity>\w*)"
| eval column2=if(column2="clear",severity,column2)
| table column1 column2

You can test the regex at https://regex101.com/r/KUTS3I/1 Ciao. Giuseppe
I am trying to restrict access to a KV store lookup in Splunk. When I set the read/write permissions only for users assigned to the test_role role, it should not be accessible by any user outside that role, but it isn't working as expected with the KV store lookup. Can anybody suggest how to achieve this?
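One way to check what Splunk actually has recorded for the lookup definition's permissions, sketched with a placeholder for the lookup definition name, is the REST endpoint for lookup transforms; note that for a KV store lookup both the lookup definition and the underlying collection typically need the restricted sharing applied:

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title="<your_kvstore_lookup_definition>"
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read eai:acl.perms.write

If eai:acl.perms.read still shows * or roles other than test_role, the permission change did not take effect on the object you expected.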