Hi @Satcom9, please check this (just a little tweak of your rex):

```
| makeresults
| eval Message="ACCU_DILAMZ9884 Failed, cueType=Splicer, SpliceEventID=0x00000BBC, SessionID=0x1A4D3100 SV event=454708529 spot=VAF00376_i pos=1 dur=0 Result=110 No Insertion Channel Found"
| rex field=Message "Result=110(?<Test>\D+)"
| table Message Test
```

But the 110 should not be hard-coded, so try this instead. Thanks.

```
| makeresults
| eval Message="ACCU_DILAMZ9884 Failed, cueType=Splicer, SpliceEventID=0x00000BBC, SessionID=0x1A4D3100 SV event=454708529 spot=VAF00376_i pos=1 dur=0 Result=110 No Insertion Channel Found"
| rex field=Message "Result=\d\d\d\s(?<Test>.*)"
| table Message Test
```
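If you want to sanity-check the pattern outside Splunk, here's a quick sketch in Python's `re` module (Python spells named groups `(?P<name>...)`; I've also loosened `\d\d\d` to `\d+` so result codes of any length would still match — that generalization is my assumption, not something from the thread):

```python
import re

message = ("ACCU_DILAMZ9884 Failed, cueType=Splicer, SpliceEventID=0x00000BBC, "
           "SessionID=0x1A4D3100 SV event=454708529 spot=VAF00376_i pos=1 dur=0 "
           "Result=110 No Insertion Channel Found")

# \d+ instead of \d\d\d so the code length is not hard-coded either
m = re.search(r"Result=\d+\s+(?P<Test>.*)", message)
print(m.group("Test"))  # No Insertion Channel Found
```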
Hi @jm_tesla,

For easy understanding, let's say there are two files, /var/log/nginx/access.log and /var/log/nginx/access1.log, inside a gzip file. When you onboard this gzip'd log to Splunk, the Splunk engine will undo the gzip, read both files, and assign the source for the first file as "/var/log/nginx/access.log" and the source for the second file as "/var/log/nginx/access1.log".

From the documentation - https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Monitorfilesanddirectories - these archive formats are supported:

- TAR
- GZ
- BZ2
- TAR.GZ and TGZ
- TBZ and TBZ2
- ZIP
- Z

Best Regards,
Sekar
ACCU_DILAMZ9884 Failed, cueType=Splicer, SpliceEventID=0x00000BBC, SessionID=0x1A4D3100 SV event=454708529 spot=VAF00376_i pos=1 dur=0 Result=110 No Insertion Channel Found

I want to extract the words that come after Result=XXX, without including Result=XXX itself in the output.

```
| rex field=Message "(?<Test>\bResult.*\D+)"
```

This produces the output "Result=110 No Insertion Channel Found", so I want to exclude the Result=XXX part.
The gzip'd files are indexed under their own source names. They come up in the query because their names match the pattern source="/var/log/nginx/access.log*". Remove the asterisk and only the one file will appear.
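For example, keeping the same path from the question, the exact-match version of the search would look like this:

```
source="/var/log/nginx/access.log"
| stats count by source
```

Without the trailing `*`, the wildcard no longer matches the rotated `access.log-<date>.gz` sources.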
Hi @James.Gardner,
Thanks for following up. I think it might be best to contact Support in this case. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases. Note: The Community is currently on temporary lockdown while we deal with a spam attack. So you will not be able to reply or create any new content in the meantime.
Hi community, I am trying to connect to the DB Connect app and I am constantly redirected to http://$HOST/en-US/app/splunk_app_db_connect/ftr. What is the FTR, and how can I get rid of this error or force a redirection to a DB Connect page that works? I tried deleting the app folder in the $SPLUNK_HOME/etc/apps directory and reinstalling, but I am still getting the same error. Any assistance here will be greatly appreciated.
Suppose I have `/var/log/nginx/access.log` and then a dozen files in the same directory named like `access.log-<date>.gz`. When Splunk processes the gzip'd files, is it supposed to index them under the `/var/log/nginx/access.log` source? I ask because I've noticed that these gzip files show up when I query: ``` source="/var/log/nginx/access.log*" | stats count by source ``` I'd appreciate a link to docs regarding this, I couldn't find any. Thanks!
We use the Qualys Technology Add-on (TA) to pull vulnerability data into Splunk. We run it on our Inputs Data Manager (IDM).

We'd like to include additional fields in our data pulls, but in order to do that we need to go to the setup page. When going to the setup page on the IDM, it never loads, and we see this in web_service.log:

```
2024-09-03 21:26:32,726 INFO __init__:654 - Authorization Failed: b'{"messages":[{"type":"ERROR","text":"You (user=myusername) do not have permission to perform this operation (requires capability: edit_telemetry_settings)."}]}'
```

From what I've been told, edit_telemetry_settings can only be assigned to admins, not sc_admins, so no one has access to the setup page. Qualys tells me that other users with IDMs are using the Qualys TA fine, but our issue has persisted across restarts, multiple environments, and multiple TA versions.

Can anyone confirm they can load the setup page for the Qualys TA from an IDM?
Since the new sample events don't have a method field (GET, POST, etc.), we can get rid of that part of the regex.

```
| rex "\/rest\/(?<field1>[^\/]+)\/(?<field2>.*)"
```
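To see what that regex actually captures on the sample paths, here's the same pattern translated to Python's `re` syntax (named groups become `(?P<name>...)`; note that `.*` is greedy, so `field2` keeps the trailing slash unless you strip it):

```python
import re

# The suggested rex, in Python regex syntax
pattern = re.compile(r"/rest/(?P<field1>[^/]+)/(?P<field2>.*)")

for path in ["/rest/Apple/1.0/", "/rest/Banana/2/", "/rest/structure/2.0/"]:
    m = pattern.search(path)
    # strip the trailing slash that the greedy .* picks up
    print(m.group("field1"), m.group("field2").rstrip("/"))
```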
I recently issued "splunk set default-hostname <hostname>" on a new node I added to our search head cluster. It ended up replicating etc/system/local/inputs.conf to all other members, so, obviously, all search head members began logging their events with the same 'host' field.

If I want to avoid this in the future, how do I leverage conf_replication_summary.excludelist to exclude the file from replication? I'm thinking it'd be something like this, but I really don't know, as I've never used this setting before:

```
[shclustering]
conf_replication_summary.excludelist.inputs = etc[/\\]system[/\\]local[/\\]inputs\.conf
```

Thank you.
Did you ever get an answer? I have the same problem with some of the apps. I was able to install the app, but once I log into Splunk and try to bring up the app I get that same error message. I even ran chown and chmod on my /opt/splunk directories, and I still get this on my Splunk Add-on for AWS.
I need to run Splunk Stream on some universal forwarders to capture data from a set of servers. The only way I've been able to do this is by running splunkd as root, which is not viable in production.

I am deploying Splunk_TA_stream 8.1.3 to the forwarders using a deployment server; the forwarders are configured for boot-start. I've followed the documentation on installing the add-on and running set_permissions.sh to change the binary to run as root. However, restarting Splunk reverts the permissions on the streamfwd binary, and streaming fails to start, throwing the errors below. If I modify the service to run as root, Stream works as expected.

```
(CaptureServer.cpp:2338) stream.CaptureServer - SnifferReactor was unable to start packet capture
sniffer (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor unrecognized link layer for device <ens192>: 253
```

The servers I need to stream from are all running Red Hat 9.4 on VMware 8 using VMXNET 3 NICs. I'm aware of workarounds others have come up with, but we need a permanent solution to this problem.

streamfwd app error in /var/log/splunk/streamfwd.l... - Splunk Community
May I ask another silly question? I am getting closer to what I need. If I had the following examples:

```
/rest/Apple/1.0/
/rest/Banana/2/
/rest/structure/2.0/
```

how could I define a variable via regex to best tease out what's an Apple, Banana, and/or structure? I tried what you provided below, but it's giving me the full log. I just need it to show `rest/***/***`. Here is what I have:

```
| rex "(?<method>\w+) \/rest\/(?<ActionTaken>.*)"
```
AKA, ActionTaken would be equal to Apple, Banana, or structure: "Apple 1.0", "Banana 2", "structure 2.0". (TL;DR: basically anything after /rest/*/*/.)
This is a thread from so long ago and is about a long forgotten version. Nowadays collect is much more flexible, especially if you're using output_format=hec
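As a rough sketch of what that looks like (the index and field names here are made up for illustration), `output_format=hec` lets `collect` write the results with their field structure preserved rather than as flat raw text:

```
index=web sourcetype=access_combined
| stats count by status, host
| collect index=my_summary output_format=hec
```

The summary index (`my_summary` in this sketch) must already exist; check the `collect` documentation for your Splunk version before relying on this in production.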
Right, _TCP_ROUTING, not _TCP_ROUTE - my typo. But again - you can't selectively forward some part of the data from a particular input to a specific output. It's all or nothing. If you can live with that, it's... something I'd still test in a lab before pushing to prod.
A little background: our organization set up hundreds of service templates when we rolled out ITSI. We're trying to clean up unwanted KPIs in these services. I have one KPI that I want off of all the service templates. The manual process of navigating:

1) Configuration
2) Service Monitoring
3) Service Templates
4) Search for a service
5) Edit
6) Click the X on the unwanted KPI
7) Save the template
8) Propagate the change

is taking forever to do in bulk. Is there a faster way?
I read through the Splunk docs, and it seems like a UF with customized inputs.conf and outputs.conf files should work. If the two Enterprise servers are defined in the outputs.conf file, then we can use an inputs.conf stanza to customize the destination where various log files are sent. Just wanted to confirm before accepting this solution. From the inputs.conf spec:

```
_TCP_ROUTING = <comma-separated list>
* A comma-separated list of tcpout group names.
* This setting lets you selectively forward data to one or more specific indexers.
* Specify the tcpout group that the forwarder uses when forwarding the data.
  The tcpout group names are defined in outputs.conf with
  [tcpout:<tcpout_group_name>].
* To forward data to all tcpout group names that have been defined in
  outputs.conf, set to '*' (asterisk).
* To forward data from the "_internal" index, you must explicitly set
  '_TCP_ROUTING' to either "*" or a specific splunktcp target group.
* Default: The groups specified in 'defaultGroup' in [tcpout] stanza in
  the outputs.conf file
```
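To make the spec excerpt concrete, here is a minimal sketch of the two files (all group names, hostnames, and monitor paths below are made up for illustration):

```
# outputs.conf - define the two tcpout groups
[tcpout:primary_indexers]
server = splunk-a.example.com:9997

[tcpout:secondary_indexers]
server = splunk-b.example.com:9997

# inputs.conf - route each monitored file per stanza
[monitor:///var/log/app/app.log]
_TCP_ROUTING = primary_indexers

[monitor:///var/log/app/audit.log]
_TCP_ROUTING = primary_indexers, secondary_indexers
```

Note that, as discussed above, the routing is per input stanza: every event read by a given stanza goes to the listed group(s), so you cannot split a single input's events between destinations this way.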