All Posts


Hi, I want to extract the highlighted part:

RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] 17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655]
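If the part needed is the alarm name, device, IP Group and blocked reason, a rex extraction along these lines may work; this is only a sketch, the field names are my own, and the pattern assumes the event text shown above:

```
... | rex field=_raw "RAISE-ALARM:(?<alarm_name>[^:]+):\s+\[(?<device>[^\]]+)\].*?IP Group \((?<ip_group>[^)]+)\) Blocked Reason: (?<blocked_reason>[^;]+);"
```

Note that events without the "IP Group (...)" portion (such as the acProxyConnectionLost event above) will not match this pattern; those would need a separate rex or optional groups.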
Hi, I want to extract the colored part:

RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] 17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655]
| makeresults
| eval _raw="{ \"time\": \"2024-09-19T08:03:02.234663252Z\", \"json\": { \"ts\": \"2024-09-19T15:03:02.234462341+07:00\", \"logger\": \"<anonymized>\", \"level\": \"WARN\", \"class\": \"net.ttddyy.dsproxy.support.SLF4JLogUtils\", \"method\": \"writeLog\", \"file\": \"<anonymized>\", \"line\": 26, \"thread\": \"pool-1-thread-1\", \"arguments\": {}, \"msg\": \"{\\\"name\\\":\\\"\\\", \\\"connection\\\":22234743, \\\"time\\\":20000, \\\"success\\\":false, \\\"type\\\":\\\"Prepared\\\", \\\"batch\\\":false, \\\"querySize\\\":1, \\\"batchSize\\\":0, \\\"query\\\":[\\\"select * from whatever.whatever w where w.whatever in (?,?,?) \\\"], \\\"params\\\":[[\\\"1\\\",\\\"2\\\",\\\"3\\\"]]}\", \"scope\": \"APP\" }, \"kubernetes\": { \"pod_name\": \"<anonymized>\", \"namespace_name\": \"<anonymized>\", \"labels\": { \"whatever\": \"whatever\" }, \"container_image\": \"<anonymized>\" } }"
| spath json.msg output=msg
| spath input=msg query{}
When running a search on the Incident Review dashboard where the search term is the <event_id> value or event_id="<event_id>", there are no results. It used to work in the past; in one of the last updates, it stopped working. I am using Enterprise Security version 7.3.2.
Yep, you are right; I wasn't thinking about that. We can still use the following:

| streamstats count as Rank
| delta Score as Diff
| eval Rank=if(Diff=0,null(),Rank)
| filldown Rank
| fields - Diff
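For anyone who wants to try that ranking without real data, a self-contained sketch with made-up scores (note that eval wants null() rather than a bare null):

```
| makeresults count=5
| streamstats count as n
| eval Score=case(n=1,100, n=2,90, n=3,90, n=4,80, n=5,70)
| fields - n
| streamstats count as Rank
| delta Score as Diff
| eval Rank=if(Diff=0,null(),Rank)
| filldown Rank
| fields - Diff
```

With these sample values the tied scores (the two 90s) should end up sharing Rank 2 after filldown.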
{ "time": "2024-09-19T08:03:02.234663252Z", "json": { "ts": "2024-09-19T15:03:02.234462341+07:00", "logger": "<anonymized>", "level": "WARN", "class": "net.ttddyy.dsproxy.support.SLF4JLogUtils", "method": "writeLog", "file": "<anonymized>", "line": 26, "thread": "pool-1-thread-1", "arguments": {}, "msg": "{\"name\":\"\", \"connection\":22234743, \"time\":20000, \"success\":false, \"type\":\"Prepared\", \"batch\":false, \"querySize\":1, \"batchSize\":0, \"query\":[\"select * from whatever.whatever w where w.whatever in (?,?,?) \"], \"params\":[[\"1\",\"2\",\"3\"]]}", "scope": "APP" }, "kubernetes": { "pod_name": "<anonymized>", "namespace_name": "<anonymized>", "labels": { "whatever": "whatever" }, "container_image": "<anonymized>" } }

To begin with, I'd like to do the equivalent of `jq '.json.msg|fromjson|.query[0]'`. After that, eventually, do the actual parameter substitutions, deduplication, counting, and min/max time, but that's well beyond the scope of this question.
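For the jq expression above, a sketch in SPL, assuming the event shown is _raw; spath's {0} index selects the first array element, mirroring .query[0]:

```
| spath path=json.msg output=msg
| spath input=msg path=query{0} output=first_query
```

The first spath pulls the embedded JSON string out of json.msg; the second parses that string and extracts the first entry of its query array.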
I am testing the SmartStore setup on S3 with Splunk Enterprise running on an EC2 instance. I am attempting this with an IAM role that has full S3 access. When I included the access keys in indexes.conf and started the instance, SmartStore started successfully. However, when I assigned the IAM role to the EC2 instance and removed the key information from indexes.conf, Splunk froze at the loading screen while loading indexes.conf. Running AWS commands shows that various files from S3 are listed. During the loading process, Splunk freezes and does not start, and splunkd.log shows a shutdown message at the end. If I re-enter the key information in indexes.conf, it works again. I want to operate this using the IAM role. Below is the indexes.conf:

[default]
remotePath = volume:rstore/$_index_name

[volume:rstore]
storageType = remote
path = s3://<S3-bucket-name>
remote.s3.endpoint = https://s3.ap-northeast-1.amazonaws.com
This is based on your local time format so you might have to change that to not include the year (do you really want to do that?)
Actually, UDP is _not_ a stream. UDP is a connectionless protocol and every datagram is independent of all the others.
Please share the raw event that you are working on, anonymised and in a code block to preserve formatting.
I'm new to Splunk and really struggle with its documentation. Every time I try to do something, it does not work as documented. I'm pretty fluent with the free tool jq, but using it requires downloading the data from Splunk to process it, which is very inconvenient to do across the globe. I have a query producing JSON. I'd like to do this trivial thing: extract data from the field json.msg (a trivial projection), parse it as JSON, then proceed further. In jq this is as easy as: '.json.msg | fromjson'. Done. Can someone advise how to do this in Splunk? I tried: … | spath input=json.msg output=msg_raw path=json.msg and multiple variants of that, but it either does not compile (say, if path is missing) or does nothing. … | spath input=json.msg output=msg_raw path=json.msg | table msg_raw prints empty lines. I need to do much more complex things with it (reductions/aggregations/deduplications), all trivial in jq, but even this is not doable in a Splunk query. How do I do this? Or where is valid documentation showing things that work?
Hi @hazem , ok, now it's clear. Anyway, let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hello @gcusello  regarding your question: There's only one not clear thing: why are you speaking of a single intermediate Forwarder? No, I have 2 forwarders, but as you know, since UDP is a stream, one forwarder will handle all traffic.
Hi @yuanliu  Yeah, I have it set up the same way you have shown. I do still get results, but the first two fields, which should provide details of where the subnets belong, just come back as the "notfound" that I added to the search for when the subnets are not part of the lookup file (I am using a dummy subnet that is 100% present in the lookup file).
Hi @spisiakmi , ok, is the solution ok for you? let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @Iris_Pi , the first solution requires that you always have both fwname and interface fields. Ciao. Giuseppe
This sounds like a network issue - the connection from your sharepoint server to the Splunk Cloud will be going through various network elements e.g. proxies, firewalls, etc. which may be changing the address being used e.g. PAT and NAT such that the server the request ends up at either isn't the server you think it is or it is using a different port, hence the connection refused. Could this be the issue?
Hi gcusello, thank you for your reply. In fact, there should be a saved search running on a daily basis. Example: for every row in the lookup table, a query should run:

index=myindex att1=F1 AND earliest=strptime("12.09.2024", "%d.%m.%Y") | stats count as cnt
index=myindex att1=F2 AND earliest=strptime("23.04.2024", "%d.%m.%Y") | stats count as cnt
index=myindex att1=F3 AND earliest=strptime("15.06.2024", "%d.%m.%Y") | stats count as cnt
index=myindex att1=F4 AND earliest=strptime("16.03.2024", "%d.%m.%Y") | stats count as cnt

Result:

att1   cnt     att2   att3
F1     234     1100   12.09.2024
F2     4235    1100   23.04.2024
F3     3763    1100   15.06.2024
F4     42314   1100   16.03.2024
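One way to drive a search per lookup row is the map command. This is only a sketch: the lookup file name is a placeholder, and it assumes the lookup carries fields att1 and att3 as in the example above; map substitutes $field$ values from each input row into the inner search:

```
| inputlookup my_lookup.csv
| eval et=strptime(att3, "%d.%m.%Y")
| map maxsearches=50 search="search index=myindex att1=$att1$ earliest=$et$ | stats count as cnt | eval att1=\"$att1$\", att3=\"$att3$\""
```

earliest= accepts an epoch value, which is why att3 is converted with strptime before map runs. map spawns one search per row and can be expensive, so for large lookups a single search joined back to the lookup may scale better.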
Variables in a macro are surrounded by dollar signs, e.g. $var$. Tokens in a dashboard are also surrounded by dollar signs, e.g. $token$. When a macro with variables is used in a dashboard, the dollar signs have to be doubled up, e.g. $$var$$; otherwise the dashboard will assume they are tokens, and the search will probably wait for user input to give the token ($var$) a value.
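As a sketch of the point above: in Simple XML, a literal $var$ that belongs to the search rather than to the dashboard is written with doubled dollars (the macro name here is hypothetical):

```
<query>index=web `my_macro($$var$$)`</query>
```

The dashboard unescapes $$var$$ back to $var$ before running the search, while single-dollar names like $tok$ are treated as dashboard tokens.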
I do have a Splunk Enterprise license and my Splunk version is 9.1.1. The problem I have is that anyone can access the URL https://...../en-US/config and it will show up whether the user is logged in or not.