All Posts

I had an error in the email logs and have fixed it, and I am now receiving emails correctly in my local mail server. But when the alert fires, it still does not show up under Triggered Alerts.
From a security standpoint, token authentication doesn't differ from user/password authentication; it's still authentication with a static secret. You can't use SAML for REST API authentication. You might want to consider integrating an external credentials provider, such as Conjur, and rotating the tokens often.
I don't think that's the issue here. The same payload sent to the /raw endpoint would end up looking the same. It's the source formatting the data differently than before.
That's one of the options, but "*/..." makes no sense; it's enough to just use "...".
The raw endpoint is not an option because it is not supported by the logging-operator.
Thanks @PickleRick for the suggestion. Shall I use the config below? [source::.../starflow-app-logs*/...]
* matches anything but the path separator, zero or more times. The path separator is '/' on Unix or '\' on Windows; * is intended to match a partial or complete directory or file name. So for your props.conf stanza you should instead use ..., which recurses through directories until the match is met, or equivalently, matches any number of characters.
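Putting that together, a hedged sketch of the corrected stanza (reusing the path fragment from earlier in this thread; the transform name on the second line is hypothetical, added only to show where a transform would hook in):

```conf
# props.conf -- sketch only; note "..." instead of "*/..."
# "..." recurses through directories until the match is met
[source::.../starflow-app-logs...]
TRANSFORMS-teamid = extract_team_id   # hypothetical transform name
```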
Nope, I only included it for clarity while writing this post; it’s not part of my actual configuration. Note: I have removed that part from my post as well.
Replace searchmatch(text1_value,"Load Balancer") with searchmatch("text1_value=\"*Load Balancer*\""), and so on.

BTW, rename is not needed for searchmatch because it accepts any syntax/shortcut that the search command accepts. (Like search, it also does case-insensitive matching.) For example:

index=monitor name="Manager - Error" text2.value="*Rerun" text1.value IN ("*Load Balancer*", "*Endpoints*") earliest=-1d latest=now
| stats count(eval(searchmatch("text1.value=\"*Load Balancer*\""))) AS LoadBalancer
        count(eval(searchmatch("text1.value=\"*Endpoints*\""))) AS Endpoints
Token Authentication: This is definitely your best bet for security. You can create tokens through Splunk Web or via the API itself. https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTtoken

HTTPS: Always, always use HTTPS for your API calls; it's a must for encryption.

RBAC: Make sure your API user or token only has the permissions it absolutely needs. Less is more when it comes to security! Create Splunk roles and map them accordingly.

MFA: While Splunk supports MFA for user logins, it's not directly used for API calls. Instead, you'd set up MFA for the user generating the API tokens. https://docs.splunk.com/Documentation/SIM/current/User/SetupMFA

SAML: If you're using SAML, you'll still use tokens for API access; SAML is more for the web interface. Tokens are usually the way to go for most scenarios. https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/SetupuserauthenticationwithSplunk https://docs.splunk.com/Documentation/Splunk/9.3.1/Security/UseAuthTokens

Hope this helps!
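As a minimal sketch of the token approach (the host name and token value are placeholders, and the request is only built here, never sent), a Splunk authentication token rides in an Authorization: Bearer header on an HTTPS REST call:

```python
import urllib.parse
import urllib.request

def build_search_request(host: str, token: str, query: str) -> urllib.request.Request:
    """Build (but do not send) a Splunk REST search-job request
    authenticated with a bearer token."""
    url = f"https://{host}:8089/services/search/jobs"
    body = urllib.parse.urlencode({"search": query, "output_mode": "json"}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    # Splunk authentication tokens are passed as a Bearer Authorization header
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = build_search_request("splunk.example.com", "<your-token>",
                           "search index=_internal | head 1")
print(req.get_header("Authorization"))  # Bearer <your-token>
```

Sending the request with urllib.request.urlopen (over HTTPS only, per the point above) is left out on purpose, since it depends on your certificate setup.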
Do you see any more verbosity in the errors in your browser's developer tools when making the upload, such as in the network tab or the JavaScript console? Sometimes the errors there show more information, while on the website itself you are only given a generic error message.
Is that commented line "#to exract <team-id> from source" on the same line as the regex in your transforms.conf? If so, it should be on a separate line; otherwise Splunk will consider it part of the regex.
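As a sketch of the fix (the stanza name, source key, and regex here are illustrative, not taken from the original post), the comment goes on its own line rather than trailing the REGEX setting:

```conf
# transforms.conf -- hedged sketch; names and regex are hypothetical
[extract_team_id]
# to extract <team-id> from source   (comment on its own line, not appended to REGEX)
SOURCE_KEY = MetaData:Source
REGEX = starflow-app-logs-(?<team_id>[^/]+)
```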
b) This is the search, correct?

No. You keep going back to "join", which everyone advises against. Did you see that my example uses no "join"? Also, please read the sendemail syntax as @inventsekar posted.

search1
| stats count
| where count > 50000
| eval this = "search 1"
| append [search2 | stats count | where count > 50000 | eval this = "search 2"]
| append [search3 | stats count | where count > 50000 | eval this = "search 3"]
| append [search4 | stats count | where count > 50000 | eval this = "search 4"]
| stats values(this) as message
| sendemail to=mybox@example.com sendresults=true message="Exceeded 50000"

If you cannot use sendemail (e.g., no MTA defined for Splunk), just get rid of the command and use an alert. Bottom line: "join" will get you nowhere.
"The only way you can do is to use /services/collector/raw endpoint."

I understand the desire to maintain your existing Splunk setup, but I would advise against using the raw endpoint (/services/collector/raw) to transform the JSON logs back into regular log format. This approach would unnecessarily increase system load and complexity.

Instead, the best practice is to use the existing event endpoint (/services/collector/event) for ingesting data into Splunk. It is optimized for handling structured data like JSON and is more efficient.

I recommend adjusting your alerts and dashboards to work with the new JSON structure from logging-operator. While this may require some initial effort, it's a more sustainable approach in the long run:
- Update your search queries to use JSON-specific commands like spath, or set KV_MODE=json, to extract fields.
- Modify dashboards to reference the new JSON field names.
- Adjust alerts to use the appropriate JSON fields and structure.
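As a hedged sketch of the event-endpoint shape (index and sourcetype values are placeholders), the point is that /services/collector/event takes the structured log as-is under the "event" key, with no flattening back to a raw string:

```python
import json

def hec_event(log: dict, index: str = "main", sourcetype: str = "_json") -> str:
    """Wrap a structured (JSON) log record in an HEC event envelope."""
    envelope = {
        "index": index,
        "sourcetype": sourcetype,
        "event": log,  # structured payload preserved for spath / KV_MODE=json
    }
    return json.dumps(envelope)

payload = hec_event({"level": "info", "msg": "View refresh complete"})
print(payload)
```

The resulting string is what you would POST to /services/collector/event; the nested fields stay queryable on the Splunk side without any re-parsing of raw text.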
Thanks for the quick reply!  That makes sense, but it doesn't seem to work. I've tried light and dark mode in Chrome and Edge. I'm not sure if there are any other settings that might override that property, but the title remains black/white.
Is it correct that you posted three (3) sets of start-end, or am I missing something? Here is my emulation, and it gets 3 durations:

| makeresults format=csv data="_raw
2024-10-09T10:32:31.540+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
2024-10-09T09:32:14.000+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
2024-10-09T09:30:36.643+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-09T10:30:34.337+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-09T10:02:32.229+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
2024-10-09T10:00:42.108+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!"
| eval _time = strptime(replace(_raw, "(\S+).*", "\1"), "%FT%T.%3N%z")
| sort - _time
``` the above emulates index=* ("Start View Refresh (price_vw)" OR "End View Refresh (price_vw)") ```
| transaction endswith="End View Refresh" startswith="Start View Refresh"

Results:

Transaction 1:
  _raw:
    2024-10-09T09:30:36.643+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
    2024-10-09T09:32:14.000+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
  _time=2024-10-08 19:30:36.643  closed_txn=1  duration=97.357  eventcount=2  field_match_sum=0  linecount=2

Transaction 2:
  _raw:
    2024-10-09T10:30:34.337+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
    2024-10-09T10:32:31.540+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
  _time=2024-10-08 19:30:34.337  closed_txn=1  duration=117.203  eventcount=2  field_match_sum=0  linecount=2

Transaction 3:
  _raw:
    2024-10-09T10:00:42.108+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
    2024-10-09T10:02:32.229+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
  _time=2024-10-08 19:00:42.108  closed_txn=1  duration=110.121  eventcount=2  field_match_sum=0  linecount=2

Play with the emulation and compare with real data.
Yes, right you are, there is the sendCookedData=false option. It's rarely used, so I forgot about it. But since you're sending raw data this way, you don't get the metadata associated with it.
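For reference, a hedged outputs.conf sketch of that option (the group name and receiver address are placeholders):

```conf
# outputs.conf -- sketch only
[tcpout:raw_out]
server = receiver.example.com:9997
sendCookedData = false   # forward raw/uncooked data; Splunk metadata is not attached
```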
I personally follow this doc and have never had any issues. Can you please try again from scratch? https://learn.microsoft.com/en-us/entra/identity/saas-apps/splunkenterpriseandsplunkcloud-tutorial
Hi, I have uploaded a new version of our app and it has been in pending status for over 24 hours. There are no errors in the compatibility report, so I'm not sure what's wrong here. Also, I'm not sure why the new version doesn't support Splunk Cloud anymore; there were no code changes in the new version. Thanks. -M
Typically you will get an email with your Splunk Cloud trial stack details, subject: "Welcome to Splunk Cloud!" Maybe it's in spam? Your URL will look like this: https://test-p45.splunkcloud.com

I don't think your trial version is synced with your Salesforce account. Here is the link: https://docs.splunk.com/Documentation/Splunk/9.3.1/Data/Howdoyouwanttoadddata

Hope this helps.