All Posts



I am getting a 500 internal server error when I try to connect to the HF GUI. I ran firewall-cmd --list-ports, and it shows 8000/tcp. I also checked web.conf, and it shows enableSplunkWebSSL = 1, as well as httport = 8000. What else can I check? I appreciate the help in advance!
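A hedged first step, assuming the instance is still indexing its own internal logs: Splunk Web writes its errors to web_service.log with sourcetype splunk_web_service, so a search along these lines may surface the exception behind the 500 (the time range is arbitrary):

```spl
index=_internal sourcetype=splunk_web_service ERROR earliest=-1h
| head 20
```

The most recent error events usually include the Python traceback that explains the 500.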
Hi @Jana42855 , I suppose that you already have the logs indexed and stored in an index with one sourcetype. First, identify the index where the logs are stored and the sourcetype to use. Then, using that index and sourcetype, check that the field names are correct (field names are case sensitive) and that the fields used in the search (src, dest_ip, dest_port) are present in all events. You also don't need to use the search command: put all the parameters in the main search and you'll have a more performant search. And don't use index=*, because it is slower than index=your_index:

index=<your_index> src=**.**.***.** OR **.**.***.** dest_ip=**.***.***.*** dest_port=443

Ciao. Giuseppe
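If the right index and sourcetype are not known yet, one hedged way to discover them is with three separate searches: the first lists the indexes you can see, the second lists the sourcetypes in an index, and the third shows which fields are actually present (eventcount and fieldsummary are standard SPL commands; <your_index> and <your_sourcetype> are placeholders, and the 24h window is arbitrary):

```spl
| eventcount summarize=false index=* | dedup index | table index

index=<your_index> earliest=-24h
| stats count by sourcetype

index=<your_index> sourcetype=<your_sourcetype> earliest=-24h
| fieldsummary
| table field count distinct_count
```

If src, dest_ip or dest_port do not appear in the fieldsummary output, the search returns nothing no matter how the values are written.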
Assuming your fields are called column_1 and column_2, you could try something like this:

| rex field=column_1 "(?<Server>[^:]+):(?<Incident>[^:]+):(?<IncidentNumber>[^:]+):(?<Severity>.*)"
| eventstats values(Severity) as AllSeverities by Server Incident IncidentNumber
| eval AllSeverities=if(Severity="Clear",AllSeverities,Severity)
| mvexpand AllSeverities
| eval column_1=Server.":".Incident.":".IncidentNumber.":".AllSeverities
| fields column_1 column_2
| dedup column_1 column_2
Hi Team, I am continuously getting the two errors below after a restart. I am getting these errors on the indexer cluster:

ERROR SearchProcessRunner [531293 PreforkedSearchesManager-0] - preforked process=0/33361 hung up
WARN HttpListener [530927 HTTPDispatch] - Socket error from <search head IP address>:50094 while accessing /services/streams/search: Broken pipe

Please help to resolve these errors.
Hello, I'm trying to use the global account variables:

username > ${global_account.username} as tenantID
password > ${global_account.password} as token ID

to build the REST URL dynamically, but it seems that the content of the global variables is not being filled in:

2023-09-13 14:51:12,726 - test_REST_API - [ERROR] - [test] HTTPError reason=HTTP Error Invalid URL '{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf': No scheme supplied. Perhaps you meant http://{{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf? when sending request to url={{global_account.username}}/api/v2/entities?entitySelector=type("{{text1}}"),toRelationships.isClusterOfCai(type(KUBERNETES_CLUSTER),entityId("KUBERNETES_CLUSTER-846D9F2054A407A0"))&pageSize=4000&from=-30m&fields=toRelationships.isNamespaceOfCai,fromRelationships.isInstanceOf method=GET Traceback (most recent call last):

${global_account.username} has been tested with and without the https:// prefix. Can anyone help me, please?
Hello, I am trying to build a report that alerts us when a support ticket is about to hit 24 hours. The field we are using is a custom time field called REPORTED_DATE, and it displays the time like 2023-09-11 08:44:03.0. I need a report that tells us when tickets are within 12 hours or less of crossing the 24-hour mark. This is our code so far:

((index="wss_desktop_os") (sourcetype="support_remedy")) earliest=-1d@d
| search ASSIGNED_GROUP="DESKTOP_SUPPORT" AND STATUS_TXT IN ("ASSIGNED", "IN PROGRESS", "PENDING")
| eval TEST = REPORTED_DATE
| eval REPORTED_DATE2=strptime(TEST, "%Y-%m-%d")
| eval MTTRSET = round((now() - REPORTED_DATE2) /3600)
```| eval MTTR = strptime(MTTRSET, "%Hh, %M")```
| dedup ENTRY_ID
| stats LAST(REPORTED_DATE) AS Reported, values(ASSIGNEE) AS Assignee, values(STATUS_TXT) as Status, values(MTTRSET) as MTTR by ENTRY_ID

Any help would be appreciated. I will admit I struggle with time calculations.
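One hedged way to get the hours, assuming REPORTED_DATE always looks like 2023-09-11 08:44:03.0: the "%Y-%m-%d" format in the post parses only the date portion, so the age always comes out in whole days; parsing the time of day as well should fix that (strptime generally tolerates input left over after the format string, so the trailing .0 can be left unparsed — untested against this data):

```spl
| eval reported_epoch=strptime(REPORTED_DATE, "%Y-%m-%d %H:%M:%S")
| eval age_hours=round((now() - reported_epoch) / 3600, 1)
| eval hours_until_24=round(24 - age_hours, 1)
| where hours_until_24 > 0 AND hours_until_24 <= 12
```

The where clause keeps only tickets that are past the 12-hour mark but have not yet crossed 24 hours.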
It worked perfectly for me. Thank you again.
Hi etoombs, Many thanks for the suggestion, I got that sorted. Ta!
Our Splunk environment is chronically under-resourced, so we see a lot of this message:

[umechujf,umechujs] Configuration initialization for D:\Splunk\etc took longer than expected (10797ms) when dispatching a search with search ID _MTI4NDg3MjQ0MDExNzAwNUBtaWw_MTI4NDg3MjQ0MDExNzAwNUBtaWw__t2monitor__ErrorCount_1694617402.9293. This usually indicates problems with underlying storage performance.

It is our understanding that the core issue here is not so much storage as processor availability: basically, Splunk had to wait 10.7 seconds for the specified pool of processors to be available before it could run the search. We are running a single SH and a single IDX, both configured with 10 CPU cores. Also, this is a VM environment, so those are shared resources. I know, basically all of the things Splunk advises against (did I mention we are also running Windows?). No, we can't address the overall resource situation right now.

Somewhere the idea came up that reducing the number of cores might help improve processor availability: if Splunk were only waiting for 4 or 8 cores, it would at least begin the search with less initial delay, since it would have to wait for a smaller pool of cores to become available first.

So our question is: which server is most responsible for the delay, the SH or the IDX? Which would be the better candidate for reducing the number of available cores?
@Sparky1 were you able to find a solution to ingest Sophos audit logs?
Hi All, I didn't get any results using the query below. How can I check and confirm the index and sourcetype specifically, to make the query more precise?

index=* | search src=**.**.***.** OR **.**.***.** dest_ip=**.***.***.*** dest_port=443
Hi, I did it. You can use the return_to URL parameter. Here's an example URL:

https://HOSTNAME:8000/en-GB/app/search/search?return_to=/en-GB/app/MYAPP/search

When the user clicks on this URL, they will first be taken to the Search app, and then they will be redirected to your preferred app MYAPP/search. Manually test the redirection: use browser developer tools to inspect network requests and redirects, or use an online tool like https://redirectchecker.com/ — this can give you a detailed redirection report that might help pinpoint the issue. Let me know if you still have an issue. Ref: doc
Hi @gcusello , Thanks for your reply. Instead of having the outputs as 2 columns, I need to have two rows generated. For example, serverA has generated an incident that is a warning (say, disk space):

serverA:zabbix:123456:Warning Warning

The tool picks up the event and generates a ticket. Let's say nobody has done anything with it. That disk has now reached critical, which escalates the incident, and Splunk picks up the event:

serverA:zabbix:123456:Critical Critical

Because column1 is unique, the tool picks up the event and calls out the team. The team then clears the space, and Splunk picks up the event as:

serverA:zabbix:123456:Clear Clear

However, that does not match the column1 values above. What I need is that when a clear is generated, Splunk generates 2 "fake" records that would look as follows:

serverA:zabbix:123456:Warning Clear
serverA:zabbix:123456:Critical Clear

That way column1 matches the initial columns above, and the tool will pick up 2 events and clear both records that were generated. Thanks, David
In your stats statement, add the other fields you need using evals: count(eval(status="Success")) as Success, count(eval(status="Failed")) as Failed, and remove the status from the by clause. After the stats, do an eval to calculate your percentages. 
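As a sketch of how the pieces fit together for the HTTPrequest/Name/Role scenario in this thread (the index ABCD and the field names are taken from the question; the date format and the rounding precision are guesses at the desired output):

```spl
index=ABCD
| bucket _time span=1d
| stats count(eval(HTTPrequest<400)) as Success, count(eval(HTTPrequest>=400)) as Failed by _time Name Role
| eval Total=Success+Failed
| eval "Failed %"=round(Failed/Total*100, 1)
| rename _time as Date
| fieldformat Date=strftime(Date, "%d-%b-%y")
```

The count(eval(...)) pattern counts only the events where the expression is true, which is what turns the single status column into separate Success and Failed columns.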
Hi @David_B, you have to divide the fields from column1 using a regex, something like this:

<your_search>
| rex field=column1 "^([^:]*:){3}(?<severity>\w*)"
| eval column2=if(column2="clear",severity,column2)
| table column1 column2

You can test the regex at https://regex101.com/r/KUTS3I/1 Ciao. Giuseppe
I am trying to restrict access to a KV store lookup in Splunk. When I set the read/write permissions only for users assigned to the test_role role, it should not be accessible by any user outside that role, but it isn't working as expected with a KV store lookup. Can anybody suggest how to achieve this?
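For reference, a hedged sketch of the app-level metadata that usually controls this, assuming a collection named kv_test in collections.conf and a lookup definition named kv_test_lookup in transforms.conf (both names are placeholders and must match your own stanzas; untested):

```conf
# $SPLUNK_HOME/etc/apps/<your_app>/metadata/local.meta
[collections/kv_test]
access = read : [ test_role ], write : [ test_role ]
export = none

[transforms/kv_test_lookup]
access = read : [ test_role ], write : [ test_role ]
export = none
```

Note that both the collection and the lookup definition need restricting, and that roles with admin-equivalent capabilities can typically still reach the collection through the KV store REST endpoints regardless of these ACLs.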
Hi, I want to create a Splunk table using multiple fields. Let me explain the scenario. I have the following fields:

Name
Role (multiple roles will exist for each name)
HTTPrequest (there are multiple responses: 2**, 3**, 4** and 5**)

My final output should be: when the query is run, it should group the data in the below format for every day.

Date      | Name   | Role      | Success | Failed | Total | Failed %
01-Jan-23 | Rambo  | Team lead | 100     | 0      | 100   | 0
01-Jan-23 | Rambo  | Manager   | 100     | 10     | 110   | 10
01-Jan-23 | King   | operator  | 2000    | 100    | 2100  | 5
02-Jan-23 | King   | Manager   | 100     | 0      | 100   | 0
03-Jan-23 | cheesy | Manager   | 100     | 10     | 110   | 10
04-Jan-23 | cheesy | Team lead | 4000    | 600    | 4600  | 15

So, what I tried is:

index=ABCD
| bucket _time span=1d
| eval status=case(HTTPrequest < 400,"Success",HTTPrequest > 399,"Failed")
| stats count by _time Name Role status

This works, producing something like the below, but I need Success and Failed in 2 separate columns as I have shown above, and I also need to add the Failed % and Total.

Date      | Name   | Role      | HTTPStatus | COUNT
01-Jan-23 | Rambo  | Team lead | Success    | 100
01-Jan-23 | Rambo  | Team lead | Failed     | 0
01-Jan-23 | Rambo  | Manager   | Success    | 100
01-Jan-23 | Rambo  | Manager   | Failed     | 10
01-Jan-23 | King   | operator  | Success    | 2000
01-Jan-23 | King   | operator  | Failed     | 200
02-Jan-23 | King   | Manager   | Success    | 10
03-Jan-23 | cheesy | Manager   | Success    | 300
04-Jan-23 | cheesy | Team lead | Success    | 400

I used chart count over X by Y, but that allows me to use only 2 fields and not more than 2. Please could you suggest how to get this sorted?
Hello, I have a couple of Splunk columns that look as follows:

server:incident:incident#:severity severity

This object is then fed to another system, which separates it and generates incidents:

Server: hostname
incident: category of incident
incident#: the incident number
severity: Critical/Warning/Clear

Example:

serverA:zabbix:123456:Warning Warning
serverA:zabbix:123456:Critical Critical

The objective is to keep each incident unique (if Warning, then create a ticket; if Critical, then call out). All works well with the separate Critical and Warning alerts; however, when one Clear is generated, I need to generate two records that look as follows:

serverA:zabbix:123456:Warning Clear
serverA:zabbix:123456:Critical Clear

This way, the object that has been sent will get the clear. Is there a way to achieve this? Thanks, David
Please help with answers.
@bowesmana thanks for your inputs. Source 2 events are not tied to the physical clock: a single day in the application could span multiple calendar days, or multiple application days can fit within a single calendar day's time frame. I'm exploring the option of populating these two sources separately in the dashboard, passing the source 1 date/time as inputs to source 2, and getting the events by each logical date.