All Posts



Hi @allidoiswinboom, to help you I need the events in text format so I can test them on regex101.com; I cannot work from a screenshot. Anyway, why is there a dollar sign ($) in your regex? The correct regex should be REGEX = ^acl_policy_name\=\" Ciao. Giuseppe
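For context, a minimal props.conf/transforms.conf sketch of what this sourcetype rewrite could look like. The stanza names, the incoming sourcetype, and the new sourcetype value are all assumptions for illustration; only the REGEX comes from the thread:

```
# props.conf -- attach the transform to the incoming sourcetype (name assumed)
[f5:bigip:syslog]
TRANSFORMS-set_acl_sourcetype = rewrite_acl_sourcetype

# transforms.conf -- match events starting with acl_policy_name=" and rewrite the sourcetype
[rewrite_acl_sourcetype]
REGEX = ^acl_policy_name=\"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::f5:acl_policy
```

Both files would go on the first full Splunk instance that parses the data (indexers or a Heavy Forwarder), and a restart is needed for index-time transforms to take effect.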
I tried the query you provided and I am still receiving the alerts; I'm not sure what I am missing.
Hi Splunk Team, could you guide me through the process of consolidating ThousandEyes into Splunk so we can centralize alerts on a dashboard? Please share each step needed to bring TE data into Splunk. Thanks
This doesn't work on 7.3. That's a big problem for managing IPv6 networks in 2024. https://docs.splunk.com/Documentation/ES/7.3.0/Admin/Configurenewassetoridentitylist
Hi @gcusello  I have to black out certain information but see below:    Thank you for your help!
Hi @rbakeredfi, are you speaking of an Indexer or a Heavy Forwarder? Have you assigned resources correctly? How many CPUs does this server have? If you're speaking of an Indexer, do you have a performant disk: at least 800 IOPS (1200 is better)? Ciao. Giuseppe
Hi @allidoiswinboom, find the correct regex using regex101.com or, please, share some events so we can help you. Ciao. Giuseppe
@kiran_panchavat Sorry for the late reply. We tried checking all the steps (peer logs, license, OOM issues) and could not find anything wrong; everything looked good. So we tried a rolling restart of the SH cluster, and that fixed the issue; the errors were gone. Thank you for your time helping with this.
Hi @gcusello  Ok great, if that's the case, I just want to match events that start with "acl_policy_name" so I can transform the sourcetype to something else. All the events start with that, so I'm not sure what else I need to add to the REGEX. Thank you!
Hi @allidoiswinboom, I used your regex; if you don't get any results from the search, the issue is in the regex. Ciao. Giuseppe
Hi @Harish2, your search isn't correct; there are some syntax errors. | tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host | appendcols [ | makeresults | eval date=strftime(_time, "%m/%d/%Y") | lookup calendsr.csv date OUTPUT type | eval type=if(isnotnull(type),type,"NotHoliday") ] Anyway, by not using the hours, you check only part of the requirement as you described it. Ciao. Giuseppe
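Since the thread also asks about suppressing alerts during specific hours, here is a sketch of how an hour window could be added. It assumes the lookup calendsr.csv gains start_hour and end_hour columns (those column names are hypothetical):

```
| tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host
| eval date=strftime(now(), "%m/%d/%Y"), hour=tonumber(strftime(now(), "%H"))
| lookup calendsr.csv date OUTPUT type, start_hour, end_hour
| eval type=if(isnotnull(type) AND hour>=start_hour AND hour<end_hour, type, "NotHoliday")
| where type="NotHoliday"
```

The final `where` keeps results (and therefore lets the alert fire) only when the current date/hour does not fall inside a holiday window defined in the lookup.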
Hi @gcusello  I ran that search and no results were found. So is the regex incorrect? I was just trying to match that event referenced above.  Thank you!
Hi @allidoiswinboom, please, on which sourcetype are you running this search: index=f5_cs_p_p | regex "^acl_policy_name=\"$" In addition, in the REGEX in transforms.conf you should escape the quotes: REGEX = ^acl_policy_name=\"$ Ciao. Giuseppe
Hi @gcusello, Thank you so much, you gave me exactly what I want. However, I don't want to add any date or hours in the query itself; I added them in the csv file, ran the query below, and I am still receiving the alerts. Can you please let me know what I am missing? I also want to add a time column to the csv file and link it to the query, so that during the mentioned time and date my alert does not trigger. Please help me with that. | tstats count latest(_time) as _time WHERE index=app-idx host="*abfd*" sourcetype=app-source-logs BY host |appendcols [|makeresults |eval Today=date=strftime(_time, "%m/%d/%Y") |lookup calendsr.csv date OUTPUT type |eval type=if(isnotnull)(type),type,"NotHoliday"]
Hi Team, I'm currently using Version 8.2.10 and encountered an issue today. It seems that my admin account has disappeared from USERS AND AUTHENTICATION -> Users. I'm perplexed by this occurrence and would appreciate any insights into why this might have happened. Additionally, I'm seeking guidance on how to prevent similar incidents in the future.
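If the admin account is truly missing from the local authentication store, one documented recovery path is to re-seed it with user-seed.conf before the next restart (back up $SPLUNK_HOME/etc/passwd first; the password value below is a placeholder):

```
# $SPLUNK_HOME/etc/system/local/user-seed.conf
# Creates a new admin user on the next start if no matching local user exists
[user_info]
USERNAME = admin
PASSWORD = <a-new-strong-password>
```

To understand why the account disappeared, checking the audit trail is a reasonable first step, e.g. `index=_audit action=edit_user OR action=delete_user` around the time of the incident.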
We finally got Stream working, but more as a workaround. The problem is in part due to starting the UF using systemd, which allocates CPU slices for different processes. When using systemd to start the UF, Stream fails. After disabling start on boot and manually starting the UF with ./splunk start, Stream works. The second part is that when the UF starts, ownership of all the UF files is chowned to splunk:splunk. This seems logical, to ensure the UF runs as splunk (or splunkfwd). However, when Stream is initially installed, set_permissions.sh changes ownership of ../Splunk_TA_stream/Linux_x86_64/streamfwd-rhel6 to root. Starting the UF undoes this, changing ownership back to splunk. We made streamfwd-rhel6 immutable, which did prevent the ownership change back to splunk, but Stream still failed when starting with systemd. Ultimately, we had to disable systemd, make streamfwd-rhel6 immutable (after running set_permissions.sh), then start the UF manually via ./splunk start. Splunk needs to fix this so Stream works as expected without having to disable boot-start and set the immutable flag.
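For the ownership-reset half of the problem, a possible middle ground instead of abandoning systemd entirely is a drop-in override that re-applies the root ownership after each start. The unit name and install path below are assumptions for a default `splunk enable boot-start -systemd-managed 1` install; this addresses only the chown reset, not the CPU-slice/capture failure described above:

```
# /etc/systemd/system/SplunkForwarder.service.d/streamfwd-perms.conf (assumed unit name)
[Service]
# Re-apply the ownership that set_permissions.sh sets, after Splunk's own chown runs
ExecStartPost=/usr/bin/chown root:root /opt/splunkforwarder/etc/apps/Splunk_TA_stream/Linux_x86_64/streamfwd-rhel6
```

After creating the drop-in, `systemctl daemon-reload` is needed before the next restart picks it up.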
Hi @gcusello  No, the 'Splunk_TA_f5-bigip' app is on the DS but not the IDXs/CM. Is that something that ought to be pushed out to the CM/IDX?  We have a local app that is specific to our program wi... See more...
Hi @gcusello  No, the 'Splunk_TA_f5-bigip' app is on the DS but not the IDXs/CM. Is that something that ought to be pushed out to the CM/IDX?  We have a local app that is specific to our program with a ruleset for that f5:bigip:syslog but that is just specifying the hosts that route data to that index.  How can I check via the regex what sourcetypes the data has?  Thank you!   
Hi Splunkers, I notice the same issue and really wonder why Splunk is not fixing it. It seems to be an incompatibility between the VMware stack and the streamfwd service. I use Splunk Universal Forwarder 9.1.2 and Splunk Stream 9.1.1. Especially the installation on Universal Forwarders fails massively on Linux systems, which makes Splunk Stream not really usable in a distributed environment with Linux systems. My streamfwd.log always shows the same error:
2024-03-08 14:59:54 INFO [139974317471680] (CaptureServer.cpp:2001) stream.CaptureServer - Starting data capture
2024-03-08 14:59:54 INFO [139974317471680] (SnifferReactor/SnifferReactor.cpp:161) stream.SnifferReactor - Starting network capture: sniffer
2024-03-08 14:59:54 ERROR [139974317471680] (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor unrecognized link layer for device <eth0>: 253
2024-03-08 14:59:54 FATAL [139974317471680] (CaptureServer.cpp:2337) stream.CaptureServer - SnifferReactor was unable to start packet capturesniffer
2024-03-08 14:59:54 INFO [139974317471680] (CaptureServer.cpp:2362) stream.CaptureServer - Done pinging stream senders (config was updated)
2024-03-08 14:59:54 INFO [139974317471680] (main.cpp:1109) stream.main - streamfwd has started successfully (version 8.1.1 build afdcef4b)
2024-03-08 14:59:54 INFO [139974317471680] (main.cpp:1111) stream.main - web interface listening on port 8889
As you can see, my streamfwd.conf is more or less the same as everyone else's. It makes no difference if, for example, I change ipAddr to 0.0.0.0; I always get the same error.
[streamfwd]
logConfig = streamfwdlog.conf
port = 8889
ipAddr = 127.0.0.1
## --> Token HFWD
httpEventCollectorToken = ba4a2b2-2544-55e3-22ft-234vt68m0szp
## --> Specify the interface
streamfwdcapture.1.interface = eth0
Side remark: if I reinstall Splunk Enterprise 9.1.2 on the same server on which Universal Forwarder 9.1.2 with Splunk Stream 9.1.1 was installed, Splunk Stream works. That sounds like a bug in Splunk_TA_stream. It would be great to hear a statement from Splunk within the next few weeks. Kind regards, Patrick
You were correct - I added linux_secure and now src_ip is happier than "src".
When the index pipeline begins backing up at any stage, which resources are responsible for the bottleneck? Obviously, once backed up, the problem will overflow into other areas, but is there a rule of thumb along the lines of: if the backup is at the parsing pipeline then storage I/O is too low, at the merging pipeline then CPU is too low, at the typing pipeline then memory is too low, at the index pipeline then network bandwidth is the limit, etc.? I am specifically looking for information regarding a Heavy Forwarder, but any help would be appreciated. (It's not as bad as the picture makes it seem; just posting for the visual.)
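To see which queue is actually the choke point on a Heavy Forwarder, a common approach is to chart queue fill ratios from metrics.log (the host filter is a placeholder):

```
index=_internal host=<hf_host> source=*metrics.log* group=queue
| eval fill_pct=round(100*current_size_kb/max_size_kb, 1)
| timechart span=5m perc90(fill_pct) by name
```

The first queue in the chain (parsing, aggregation/merging, typing, indexing/output) that stays pegged near 100% while the queues after it remain mostly empty is the bottleneck stage; the queues before it fill up only as a downstream effect.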