All Posts


I'm new to rex and trying to extract strings from _raw (which is actually malformed JSON, so spath is not a good option either). I was able to create a rex that identifies the pattern I want (or nearly so). However, I'm having trouble establishing the correct boundaries, and this is where my lack of experience with rex shows: I cannot establish the end of my pattern correctly. I have pasted the expression I'm using and a cleaned-up sample of the text I'm dealing with.

| rex field=_raw "next\_best\_thing.+description(?<NBT>.+)topic"

I thought this would identify the beginning of my pattern as next_best_thing (as it does) and the end just after the first description, capturing the group (NBT) as \\\":\\\"Another quick brown fox jumps over the lazy dog.\\\"},{\\\" (just before the first topic). Naturally, a lot of clean-up would still be necessary, but I would have something to work with. However, it seems that the search starts from the end of the _raw string, so the description that is being captured is in a different part, and the group becomes something completely different from what I intended (\\\":\\\"A third quick brown fox jumps over the lazy dog\xAE Bla Bla BlaBla?\xA0 And a forth The quick brown fox jumps over the lazy dog.\\\"},{\\\").

Also, if the expression is just | rex field=_raw "next\_best\_thing.+description(?<NBT>.+)", omitting the end boundary (topic), the whole pattern changes, with a completely different description being used as the end boundary, and naturally the group changes completely. This reinforces the impression that the search is being performed from the end of _raw. Is there a way to change the search direction? Or am I even more wrong / lost than I think on how to establish the boundaries for the pattern and group?

"BlaBla_BlaBla_condition\\\":\\\"\\\",\\\"OtherBla\\\":{\\\"description\\\":\\\"The quick brown fox jumps over the lazy dog\\\",\\\"next_best_thing\\\":[{\\\"topic\\\":\\\"Target Public\\\",\\\"description\\\":\\\"Another quick brown fox jumps over the lazy dog.\\\"},{\\\"topic\\\":\\\"Benefit to Someone\\\",\\\"description\\\":\\\"A third quick brown fox jumps over the lazy dog\xAE Bla Bla BlaBla?\xA0 And a forth The quick brown fox jumps over the lazy dog.\\\"},{\\\"topic\\\":\\\"Call to Something\\\",\\\"description\\\":\\\"The fith quick brown fox jumps over the lazy dog.\\\"}]}},\\\"componentTemplate\\\":{\\\"id\\\":\\\"tcm:999-111111-99\\\",\\\"title\\\":\\\"BlaBlaBla_Bla_Bla\\\"},\\\"ia_rendered\\\":\\\"data-slot-id=\\\\\\\"BlaBlaBla\\\\\\\" lang=\\\\\\\"en\\\\\\\" data-offer-id=\\\\\\\"BLABLABLABLABLABLA\\\\\\\" \\\"}\",\"Rank\":\"1\"},\"categoryName\":\"\",\"source\":\"BLA\",\"name\":\"OTHETHINGSHERE_\",\"type\":null,\"placementName\":\"tvprimary\",\"presentationOrderWitinSlot\":1,\"productDetails\":{\"computerApplicationCode\":null,\"productCode\":\"BLA\",\"productSubCode\":\"\"},\"locationProductCode\":null,\"locationProductSubCode\":null,\"priorityWithInProductAndSubCode\":null}],\"error\":null},\"custSessionAvailable\":false},\"ecprFailed\":false,\"svtException\":null}"
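For what it's worth, the behaviour described is typical of greedy quantifiers rather than a reversed search direction: .+ grabs as much of _raw as possible and backtracks only as far as it must, so the match ends up using the last description/topic pair instead of the first. A minimal sketch with lazy quantifiers, reusing the field and tokens from the post (an illustration of the technique, not a tested answer for this exact event):

| rex field=_raw "next_best_thing.+?description(?<NBT>.+?)topic"

The ? after each + makes the engine stop at the first description after next_best_thing and at the first topic after that, which should leave NBT holding the text just before the first topic.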
Hi @ITWhisperer

index=foo sourcetype=json_foo source="az-foo" | rename tags.envi as env | search env="*A00001*" OR env="*A00002*" OR env="*A00005*" OR env="*A00020*" | table env

For reference, these are the fields I am using: env="*A00001*" as "PBC", env="*A00002*" as "PBC", env="*A00005*" as "KCG", env="*A00020*" as "TTK".

From this SPL, I am trying to create a table like:

PBC        | KCG        | TTK
all values | all values | all values
count      | count      | count
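A sketch of one way to get there, reusing the base search and the label mapping from the post (how values and counts should share one table is left open, so the final chart step is an assumption):

index=foo sourcetype=json_foo source="az-foo"
| rename tags.envi as env
| eval label=case(match(env,"A00001") OR match(env,"A00002"), "PBC", match(env,"A00005"), "KCG", match(env,"A00020"), "TTK")
| where isnotnull(label)
| chart count over env by label

chart count over env by label yields one column per label with a count for each env value; stats values(env) count by label is an alternative if one row per label is preferred.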
Thank you for that suggestion.  Now I'm even more confused.  The events are coming in as sourcetype=cef, and there are a lot more differences than I would have expected, including what's in system/default...  I've got some digging to do.
Thanks @isoutamo. Yes, I updated the email password via the GUI and rebooted Splunk, but the problem is still present.
Quite probably your encryption key has changed? You could try updating the old email password via the GUI so that it is encrypted again.
Hello all, I have a problem with my SMTP configuration. When I send an e-mail I get this error:

2024-02-14 16:44:15,213 +0100 ERROR cli_common:482 - Failed to decrypt value: ***************************=, error: Read custom key data size=30

Does anyone have an idea?
Maybe this is worth raising as its own idea on ideas.splunk.com?
Hi, I had an add-on built using Add-on Builder last year and it was working. In January I rebuilt it using the latest version of Add-on Builder and it started failing with:

CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate

I did not make any change to our add-on other than adding some extra logging. Does anyone know what changed in the latest Add-on Builder version 4.1.4 so that it started failing? I would appreciate any help in troubleshooting this issue.
Thank you for your help!
If the sourcetype cannot be changed, then the custom app should specify its props using source:: or host:: stanzas instead.
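For example, a minimal props.conf sketch of that idea (the stanza, path, and extraction below are placeholders, not taken from this thread):

# props.conf in the custom app -- stanza, path and field name are illustrative only
[source::/var/log/myapp/*.log]
EXTRACT-example_field = ^(?<example_field>\S+)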
Hi @snobyink, in this case, please try this regex instead of the previous one:

^\w+\s+\d+\s+\d+:\d+:\d+\s+(?<host>\w+).*user\s(?<user>\w+)+

which you can test at https://regex101.com/r/bV4B9h/1 Ciao. Giuseppe
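As a quick usage sketch (the sample event below is made up for illustration; it is not from this thread):

| makeresults
| eval _raw="Feb 14 16:44:15 myhost sshd[123]: Failed password for invalid user admin from 10.0.0.1"
| rex field=_raw "^\w+\s+\d+\s+\d+:\d+:\d+\s+(?<host>\w+).*user\s(?<user>\w+)+"
| table host user

Against that line the regex should return host=myhost and user=admin.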
I am sorry, this didn't work for me, although I did try to get it to work. But I already have a solution.
Thanks! Unfortunately the hostname is not extracted as a field. How do we extract host as well from the output? In the meantime, we are looking to see if we can install this add-on, if we can get past the red tape.
Thank you for explaining this. I didn't know about this syntax.
I guess Splunk 9.x defaults to systemd again. Is there any way to revert to init.d?
Hi @jmrubio, you mainly have to create the index on the indexers. Then, if you like (it isn't mandatory), you can also create the index on the HF, but only so that the index shows up in the dropdowns; that copy of the index will never be used. Ciao. Giuseppe
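As a minimal indexes.conf sketch for the indexers (the index name and the paths are placeholders, not from this thread):

# indexes.conf on the indexers -- index name and paths are illustrative only
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb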
Hi @Mariam001, ok, let me know. Ciao. Giuseppe
I have now done some additional research and testing. I am using Alpine Linux, which does not include systemd. That is probably why this is not working for me.

8e23f2b85b3a:/# "/opt/splunkforwarder/bin/splunk" start --accept-license --answer-yes --no-prompt
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk:splunk /opt/splunkforwarder"
This appears to be your first time running this version of Splunk.
Creating unit file...
Error calling execve(): No such file or directory
Error launching command: No such file or directory
Failed to create the unit file. Please do it manually later.
Splunk> The Notorious B.I.G. D.A.T.A.
Checking prerequisites...
Checking mgmt port [8089]: open
Creating: /opt/splunkforwarder/var/lib/splunk
Creating: /opt/splunkforwarder/var/run/splunk
Creating: /opt/splunkforwarder/var/run/splunk/appserver/i18n
Creating: /opt/splunkforwarder/var/run/splunk/appserver/modules/static/css
Creating: /opt/splunkforwarder/var/run/splunk/upload
Creating: /opt/splunkforwarder/var/run/splunk/search_telemetry
Creating: /opt/splunkforwarder/var/run/splunk/search_log
Creating: /opt/splunkforwarder/var/spool/splunk
Creating: /opt/splunkforwarder/var/spool/dirmoncache
Creating: /opt/splunkforwarder/var/lib/splunk/authDb
Creating: /opt/splunkforwarder/var/lib/splunk/hashDb
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.1.2-b6b9c8185839-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security

However, it seems to start a background process, but I don't see the logs in Splunk. Using the status command kills the background process:

8e23f2b85b3a:/# "/opt/splunkforwarder/bin/splunk" status
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk:splunk /opt/splunkforwarder"
splunkd 165 was not running.
Stopping splunk helpers...

I have tried disabling boot start:

splunk disable boot-start

But that gives me a similar error:

Error calling execve(): No such file or directory
Error launching command: No such file or directory
execve: No such file or directory while running command /sbin/chkconfig

Has something changed from 8.x to 9.x so that systemd is now used by default somehow? How can I run the universal forwarder without systemd?
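For reference, on Splunk 9.x the boot-start integration can still be switched back from systemd to an init.d-style script; a sketch of the usual commands, assuming the /opt/splunkforwarder path and splunk user shown above (on Alpine, which ships neither systemd nor chkconfig, even this may fail, in which case skipping boot-start entirely and launching splunkd directly from the container entrypoint is the pragmatic route):

# revert boot-start from systemd to an init.d-style script (run as root)
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 0 -user splunk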