As both searches invoke the same index, there is probably not much point (unless you have a very specific use case) in using a subsearch here. Just search for

index=firewall event_type=error sourcetype=metadata enforcement_mode=block

because that's effectively what your search would do. Having said that - that is probably _not_ what you need. I'd hazard a guess that you're probably looking for something like

index=firewall
| stats values(event_type) as event_types values(sourcetype) as sourcetypes values(enforcement_mode) as enforcement_modes
| where enforcement_modes="block"

(note that after the stats, the field is called enforcement_modes, so the where clause has to use that name).
My raw data will look like below:

_raw = {"id":"0","severity":"Information","message":"CPW Total= 844961,SEQ Total =244881, EAS Total=1248892, VRS Total=238, CPW Remaining=74572, SEQ Remaining=22, EAS Remaining =62751, VRS Remaining =0, InvetoryDate =4/15/2024 6:16:07 AM"}

I want to extract fields from message so that the result looks like below. I tried regex but I am unable to extract the fields. Please help me create an extraction for:

CPW Total | SEQ Total | EAS Total | VRS Total | CPW Remaining | SEQ Remaining | EAS Remaining | VRS Remaining | InvetoryDate
844961    | 244881    | 1248892   | 238       | 74572         | 22            | 62751         | 0             | 4/15/2024 6:16:07 AM
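In case it helps, a minimal sketch of one rex-based approach, assuming the message field is already extracted (e.g. by automatic JSON extraction or spath) and the key=value pairs always appear in this order; the underscored output field names are my own choice, not anything existing in your environment:

| rex field=message "CPW Total\s*=\s*(?<CPW_Total>\d+)\s*,\s*SEQ Total\s*=\s*(?<SEQ_Total>\d+)\s*,\s*EAS Total\s*=\s*(?<EAS_Total>\d+)\s*,\s*VRS Total\s*=\s*(?<VRS_Total>\d+)\s*,\s*CPW Remaining\s*=\s*(?<CPW_Remaining>\d+)\s*,\s*SEQ Remaining\s*=\s*(?<SEQ_Remaining>\d+)\s*,\s*EAS Remaining\s*=\s*(?<EAS_Remaining>\d+)\s*,\s*VRS Remaining\s*=\s*(?<VRS_Remaining>\d+)\s*,\s*InvetoryDate\s*=\s*(?<InvetoryDate>\d+/\d+/\d+ \d+:\d+:\d+ [AP]M)"
| table CPW_Total SEQ_Total EAS_Total VRS_Total CPW_Remaining SEQ_Remaining EAS_Remaining VRS_Remaining InvetoryDate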
I am not going to experience this problem because I apply a throttle per event ID, and in some cases a dedup of the ID in the query itself. I have set the alert to look 30 minutes back and run every ten, but I still lose some events that do appear if I run the search manually.
For example, one report runs at 10 minutes past the hour, looking back 10 minutes. The next time the report runs is 15 minutes past the hour, again looking back 10 minutes. Between these two runs there is a five-minute overlap, between 5 past and 10 past the hour. If you don't take account of this, you could be double counting your events.
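A common way to avoid both double counting and gaps is to make the lookback window exactly match the run interval and snap both edges to the minute, so consecutive runs tile the timeline without overlapping. A minimal sketch for a search scheduled every five minutes (cron */5 * * * *; the index name is a placeholder):

index=your_index earliest=-5m@m latest=@m

Each run then covers exactly the five whole minutes ending at the start of the current minute, and the next run picks up exactly where this one left off.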
How can I do this? Note that the forwarder is an Edge Processor and you can't touch the conf files; everything is modified in the GUI.
Could you explain to me what you mean by overlapping times?
I am trying to build some modular documentation as a Splunk app on a site with an indexer and search head cluster. Some of the reasoning behind this is that I spend quite some time researching existing configuration when I'm about to make new changes. Thus I would like to be able to create views showing me details from props, transforms and indexes on the search heads. My question is: do you see any potential pitfalls in having the configuration on the search heads as well as the indexers? Or is there any other solution for being able to view configuration on the indexer peers from the search heads? Cheers!
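One option that avoids duplicating the configuration: the rest command can query the configuration endpoints of all connected search peers directly from a search head. A minimal sketch (the conf-props endpoint is standard; the listed attribute columns are just examples of what you might care about):

| rest /services/configs/conf-props splunk_server=*
| table splunk_server title LINE_BREAKER TIME_PREFIX TRANSFORMS*

The same pattern works for conf-transforms and conf-indexes, so a documentation app could build its views on searches like this instead of keeping a copy of the .conf files on the search heads.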
@meshorer there isn't anything inbuilt, but there is a Custom Function in the community repo called "find_related_containers" which should get you somewhere close to what you want. TBH I would recommend building your own, but it can be complicated depending on how you want to define "relevant" containers. As for the playbook logs, I am not sure where they are on disk. I can't see anything in $PHANTOM_HOME/var/log/phantom but suspect they are somewhere on the system.
We're running into the same (or similar) issue. We're not using appLogo but appIcon to set the app's icon. The icon AND the label are displayed on the dashboard selection page accordingly, but as soon as you click to show one particular dashboard, the label disappears and only the icon stays. I can't say for sure it was not present before, but several users have noticed it since our upgrade from 9.0.x to 9.1.x.
You are deduping 'x', so you need to understand the consequences of that. Your search is not doing any aggregations, so without knowing what combinations of Application, Action and Target_URL you have, it's impossible to know what's going on here. These 3 lines are most likely the source of your problem:

| mvexpand x
| mvexpand y
| dedup x
Hello Champs,
This message is info only and can be safely ignored. Alternatively, you can turn it off by setting the TcpInputProc log level to WARN. If you can't restart splunkd yet, simply run:

$SPLUNK_HOME/bin/splunk set log-level TcpInputProc -level WARN

To make the change persistent:
* Create or edit $SPLUNK_HOME/etc/log-local.cfg
* Add: category.TcpInputProc=WARN
* Restart splunkd.
@all When I'm trying to install and configure the #otel collector to send data from an agent-mode collector to a gateway collector in #Splunk Observability Cloud, I'm facing many challenges and am not able to get the agent to send data to the gateway. Can anyone guide me on how to solve this issue?
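For anyone hitting the same wall, a minimal sketch of the agent-side exporter wiring, assuming the standard Splunk OpenTelemetry Collector agent configuration and a gateway listening on the default OTLP gRPC port 4317; the gateway hostname is a placeholder and only the exporter-related fragment is shown:

# agent config fragment: forward telemetry to the gateway over OTLP
exporters:
  otlp:
    endpoint: "gateway.example.com:4317"  # placeholder: your gateway host
    tls:
      insecure: true  # assumes no TLS between agent and gateway
service:
  pipelines:
    metrics:
      exporters: [otlp]  # receivers/processors omitted for brevity
    traces:
      exporters: [otlp]

If the agent still cannot connect, checking that port 4317 on the gateway is reachable from the agent host is usually the first step.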
Hi @neilgalloway does it give any error when you save the identity? Would you please share a screenshot of the error you are receiving when trying to save the connection using that identity?
Hi @SureshkumarD would it be possible to provide some sample data to go with the search?
Hi @pgabo66,
you have to create a new field extraction associated with your sourcetype, using this rule on the event.url field:

^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+)

Ciao.
Giuseppe
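For reference, a minimal sketch of what the equivalent props.conf stanza might look like if you configure the extraction directly rather than through the UI (the sourcetype name is a placeholder):

[your_sourcetype]
EXTRACT-url_domain = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+) in event.url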
Hi @mahesh27
As @bowesmana said, this is a classic proving-a-negative issue and you can find thousands of answers in Community. In this case you have two solutions.
If you have a list of hosts to monitor to put in a lookup (called e.g. perimeter.csv, with at least one column called host), you could run something like this:

| tstats count WHERE index=app-logs sourcetype=app-data source=*app.logs* host IN (appdatajs01, appdatajs02, appdatajs03, appdatajs04) BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total by host
| where total<100

If you don't have a lookup or you don't want to manage one, you could run something like this:

| tstats count latest(_time) AS _time WHERE index=app-logs sourcetype=app-data source=*app.logs* host IN (appdatajs01, appdatajs02, appdatajs03, appdatajs04) earliest=-30d@d latest=now BY host
| where _time<now()-3600

In this way, you get the hosts that sent logs in the last 30 days but not in the last hour (you can modify the time periods if needed). In addition, the command | bin span=1m _time makes no sense here because you don't use time in your stats.

Ciao.
Giuseppe
I understand, it's a look-and-feel concern. You can search https://ideas.splunk.com/ to see if someone is already asking for this parity; if not, you can submit an idea for it.
@Cansel.OZCAN I have analytics, but it is still not showing. Can you please tell me the query for getting the "top 10 sessions by weight" widget?
This ask could have two interpretations. The simple one is extremely simple. Let me give you the formula first.

| inputlookup pod_name_lookup where NOT [search index=abc sourcetype=kubectl
  | eval pod_name = mvindex(split(pod_name, "-"), 0)
  | stats values(pod_name) as pod_name]
| stats dc(pod_name) as count values(pod_name) as pod_name by importance

Your mock data will give you something like

pod_name  importance
podc      critical

Now, my interpretations of your use case. First, I think your lookup table actually looks like this, with pod_name as the column name instead of pod_name_lookup. Is this correct?

pod_name  importance
poda      non-critical
podb      critical
podc      critical

I call the lookup "pod_name_lookup". Second, I interpret the "pod_name" column in the lookup table, mocked up as "poda", "podb", "podc", to be the first part of the running pod names (mocked up as "poda-284489-cs834" and "podb-834hgv8-cn28s") that does not contain a dash. If this is not how the two names match, you will need to either make the transformation yourself or come up with more accurate mockups. Now, I am assuming that "importance" in the lookup and in the events match exactly. If you want to detect discrepancies in "importance" as well, the search will be more complicated.
Hi @sphiwee,
if you don't know PowerShell very well, why do you want to use it? You can use a simple batch script, some other tool (such as Ansible), or a Windows GPO (surely you have a Domain Controller).
Anyway, you could see this link for detailed instructions:
https://docs.splunk.com/Documentation/Forwarder/9.0.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller#Install_a_Windows_universal_forwarder_from_the_command_line
or the solution from this Community Champion:
https://community.splunk.com/t5/Getting-Data-In/Powershell-unattended-installation/m-p/81069
Ciao.
Giuseppe
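For reference, a minimal sketch of the unattended command-line install those docs describe (the MSI file name and the server addresses are placeholders for your environment):

msiexec.exe /i splunkforwarder-x64.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploymentserver.example.com:8089" RECEIVING_INDEXER="indexer.example.com:9997" /quiet

Run it from an elevated prompt; a batch script can loop this over a list of hosts with a remote-execution tool such as psexec.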