
All Posts

Replace searchmatch(text1_value,"Load Balancer") with searchmatch("text1_value=\"*Load Balancer*\""), and so on.  BTW, rename is not needed for searchmatch because it accepts any syntax/shortcut that the search command accepts. (Like search, it also does case-insensitive matching.)  For example:

index=monitor name="Manager - Error" text2.value="*Rerun" text1.value IN ("*Load Balancer*", "*Endpoints*") earliest=-1d latest=now
| stats count(eval(searchmatch("text1.value=\"*Load Balancer*\""))) AS LoadBalancer count(eval(searchmatch("text1.value = \"*Endpoints*\""))) AS Endpoints
Token Authentication: This is definitely your best bet for security. You can create these through Splunk Web or via the API itself. https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTtoken

HTTPS: Always, always use HTTPS for your API calls. It's a must for encryption.

RBAC: Make sure your API user or token only has the permissions it absolutely needs. Less is more when it comes to security! Create Splunk roles and map them accordingly.

MFA: While Splunk supports MFA for user logins, it's not directly used for API calls. Instead, you'd set up MFA for the user generating the API tokens. https://docs.splunk.com/Documentation/SIM/current/User/SetupMFA

SAML: If you're using SAML, you'll still use tokens for API access. SAML is more for the web interface. Tokens are usually the way to go for most scenarios. https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/SetupuserauthenticationwithSplunk https://docs.splunk.com/Documentation/Splunk/9.3.1/Security/UseAuthTokens

Hope this helps!
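As a rough sketch of the token workflow (not a drop-in: the host, port, user name, audience, and token value below are placeholders, and your management port may not be the default 8089; see the RESTtoken doc linked above for the full parameter list):

# Create a token for a user via the REST API
curl -k -u admin https://splunk.example.com:8089/services/authorization/tokens \
     -d name=svc_api -d audience="api scripts"

# Use the returned token for subsequent calls, always over HTTPS
curl -k -H "Authorization: Bearer <token>" \
     https://splunk.example.com:8089/services/search/jobs/export \
     --data-urlencode search="search index=_internal | head 5" -d output_mode=json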
Do you see any more verbosity in the errors in the developer tools of your browser when making the upload, such as in the network tab and the JavaScript console? Sometimes the errors there show more information, while the website itself only gives you a generic error message.
Is that commented line " #to exract <team-id> from source" on the same line as the regex in your transforms.conf? If so, it should be on a separate line, otherwise Splunk will consider it part of the regex.
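For illustration, here is the relevant stanza from the transforms.conf in the original post with the comment moved onto its own line (remaining settings omitted); treat it as a sketch rather than a tested config:

[route_to_teamid_index]
# to extract <team-id> from source
REGEX = us-east-1:\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = source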
"b) This is the search, correct?"

No.  You keep going back to "join", which everyone advises against.  Did you see that my example uses no "join"?  Also, please read the sendemail syntax as @inventsekar posted.

search1
| stats count
| where count > 50000
| eval this = "search 1"
| append [search2 | stats count | where count > 50000 | eval this = "search 2"]
| append [search3 | stats count | where count > 50000 | eval this = "search 3"]
| append [search4 | stats count | where count > 50000 | eval this = "search 4"]
| stats values(this) as message
| sendemail to=mybox@example.com sendresults=true message="Exceeded 50000"

If you cannot use sendemail (e.g., no MTA defined for Splunk), just get rid of the command and use an alert. Bottom line: "join" will get you nowhere.
The /services/collector/raw endpoint would be the only way to do that, but while I understand the desire to maintain your existing Splunk setup, I would advise against using the raw endpoint (/services/collector/raw) to transform the JSON logs back into a plain log format. This approach would unnecessarily increase system load and complexity. Instead, the best practice is to use the existing event endpoint (/services/collector/event) for ingesting data into Splunk; it is optimized for handling structured data like JSON and is more efficient. I recommend adjusting your alerts and dashboards to work with the new JSON structure from logging-operator. While this may require some initial effort, it's a more sustainable approach in the long run:
- Update your search queries to use JSON-specific handling, such as the spath command or KV_MODE = json, to extract fields.
- Modify dashboards to reference the new JSON field names.
- Adjust alerts to use the appropriate JSON fields and structure.
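As a rough illustration of the first point, an SPL sketch; the index, sourcetype, and JSON paths are hypothetical stand-ins for whatever logging-operator actually emits in your environment:

index=k8s sourcetype=kube:container:json
| spath input=_raw path=log.level output=level
| spath input=_raw path=kubernetes.labels.app output=app
| stats count by app, level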
Thanks for the quick reply!  That makes sense, but it doesn't seem to work. I've tried light and dark mode in Chrome and Edge. I'm not sure if there are any other settings that might override that property, but the title remains black/white.
Is it correct that you posted three (3) sets of start-end, or am I missing something?  Here is my emulation and it gets 3 durations:

| makeresults format=csv data="_raw
2024-10-09T10:32:31.540+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
2024-10-09T09:32:14.000+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
2024-10-09T09:30:36.643+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-09T10:30:34.337+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-09T10:02:32.229+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
2024-10-09T10:00:42.108+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!"
| eval _time = strptime(replace(_raw, "(\S+).*", "\1"), "%FT%T.%3N%z")
| sort - _time
``` the above emulates index=* ("Start View Refresh (price_vw)" OR "End View Refresh (price_vw)") ```
| transaction endswith="End View Refresh" startswith="Start View Refresh"

The three resulting transactions:

2024-10-09T09:30:36.643+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-09T09:32:14.000+07:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
  _time=2024-10-08 19:30:36.643  closed_txn=1  duration=97.357  eventcount=2  field_match_sum=0  linecount=2

2024-10-09T10:30:34.337+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-09T10:32:31.540+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
  _time=2024-10-08 19:30:34.337  closed_txn=1  duration=117.203  eventcount=2  field_match_sum=0  linecount=2

2024-10-09T10:00:42.108+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : Start View Refresh (price_vw) !!!
2024-10-09T10:02:32.229+08:00 | INFO | 1 | | xxxxxxxxxxxxxxxxx : End View Refresh (price_vw) !!!
  _time=2024-10-08 19:00:42.108  closed_txn=1  duration=110.121  eventcount=2  field_match_sum=0  linecount=2

Play with the emulation and compare with real data.
Yes, right you are, there is the sendCookedData=false option. It's rarely used, so I forgot about it. But since you're sending raw data this way, you don't get the metadata associated with it.
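For reference, a minimal outputs.conf sketch on the forwarder side; the group name and the Logstash host/port are placeholders:

[tcpout]
defaultGroup = logstash_raw

[tcpout:logstash_raw]
server = logstash.example.com:5514
# Send uncooked (raw) data so a non-Splunk receiver can read the stream;
# Splunk metadata (host/source/sourcetype) is not attached when cooking is off.
sendCookedData = false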
I personally follow this doc and have never had any issues. Can you please try this from scratch again? https://learn.microsoft.com/en-us/entra/identity/saas-apps/splunkenterpriseandsplunkcloud-tutorial
Hi, I have uploaded a new version of our app and it has been in pending status for over 24 hours. There are no errors in the compatibility report. I'm not sure what's wrong here. Also, I'm not sure why the new version doesn't support Splunk Cloud anymore; there were no code changes in the new version. Thanks. -M
Typically you will get an email with your Splunk Cloud trial stack details. Subject: "Welcome to Splunk Cloud!" Maybe it's in spam? Your URL will look like this:
Splunk Cloud URL: https://test-p45.splunkcloud.com
I don't think your trial version is synced with your Salesforce account. Here is the link: https://docs.splunk.com/Documentation/Splunk/9.3.1/Data/Howdoyouwanttoadddata
Hope this helps.
I have been working on routing logs based on their source into different indexes. I configured the props.conf and transforms.conf below on my HF, but it didn't work. We currently follow the naming convention below for our CloudWatch log group names:

/starflow-app-logs-<platform-name>/<team-id>/<app-name>/<app-environment-name>

Example sources:
us-east-1:/starflow-app-logs/sandbox/test/prod
us-east-1:/starflow-app-logs-dev/sandbox/test/dev
us-east-1:/starflow-app-logs-stage/sandbox/test/stage

Note: We are currently receiving log data for the above use case from the us-east-1 region.

Condition: If the source path contains a <team-id>, logs should be routed to the respective index in Splunk. Whatever <team-id> the source path contains, its logs should be routed to the matching <team-id>-based index, which already exists in our Splunk environment.

props.conf:
[source::us-east-1:/starflow-app-logs*]
TRANSFORMS-set_starflow_logging = new_sourcetype, route_to_teamid_index

transforms.conf:
[new_sourcetype]
REGEX = .*
SOURCE_KEY = source
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:kinesis:starflow
WRITE_META = true

[route_to_teamid_index]
REGEX = us-east-1:\/starflow-app-logs(?:-[a-z]+)?\/([a-zA-Z0-9]+)\/
SOURCE_KEY = source
FORMAT = index::$1
DEST_KEY = _MetaData:Index

I'd be grateful for any feedback or suggestions to improve this configuration. Thanks in advance!
I checked your post. The IdP certificate is a self-signed certificate, so it is not chained and no root or intermediate certificate is available.
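If you want to confirm that yourself, one way (assuming the IdP certificate is saved locally as idp_cert.pem, a placeholder name) is to compare the subject and issuer; on a self-signed certificate they are identical:

# Print subject and issuer of the certificate; matching values indicate self-signed
openssl x509 -in idp_cert.pem -noout -subject -issuer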
@ITWhisperer @gcusello  Do you have any ideas on how to fix this?  Thanks
Thank you for your response. I'm limited to using the Universal Forwarder (UF) only. From various resources, I've learned that the UF can send raw data, which works for me. I've successfully set up Logstash to receive syslog data from my Mac using the TCP input plugin. I'm now wondering how I can configure it to receive data based on the source or sourcetype. Alternatively, is there a way to send all data to Logstash in real time?
This is more statement than question, but the community should be advised that Splunk Universal Forwarder 9.1.2 and 9.1.5 (those I have used and witnessed this occurring on, I cannot speak to others yet) does not abide by cron syntax 100% of the time. It seems to happen at a very low frequency, which may make repeated runs of scripted inputs difficult to notice or detect. I feel it's important the community is aware of this, as some scripts may be time sensitive, or you may expect the scripts to run only once in a reporting period so no deduplication was added, amongst other possible circumstances where a double run might cause unintended impacts.

In my case I noticed it when doing a limited deploy of a Deployment Server GUID reset script. With only 8 targeted systems scheduled to run once per day, it was very easy to notice that it ran twice for multiple assets. Fortunately, I'd designed the script to create a bookmark so it wouldn't run more than once, so the UF could run it every day to catch new systems or name changes, and a repeated run wouldn't cause a flood of entries in the DS.

In my environment I noticed the scheduler prematurely ran the scripts roughly 2 seconds before the cron-scheduled hour-minute combination, so the following SPL can find when this occurred:

index=_internal sourcetype=splunkd reschedule_ms reschedule_ms<2000

Example: a scripted input is scheduled to run at "2 7 * * *" (07:02:00), but there was a run at 07:01:58 AND at 07:02:00. It ran two seconds early, then rescheduled to the next scheduled interval, which happens to be the expected time (two seconds later), and the cycle repeats. It seemed to fluctuate and vary, and sometimes even resolve itself given enough time. It's difficult to know what influences it, or why after a correct run the number of milliseconds added falls short by ~2000, but it does happen and may be influencing your scripts too.

Reported
I opened a ticket with Support, but there's no word on a permanent resolution to this "less than 1% of forwarders affected" issue so far, so I felt it was important to let the community know to check whether this is happening, and whether it matters in your environment (maybe a little duplication doesn't matter).

Workarounds
Changing from cron syntax to a number of seconds for the interval does work as a workaround, but that isn't always what you want; sometimes you want a specific minute in an hour, not just hourly (whenever). Adding logic into the script to check the time is another possible workaround.
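To illustrate the seconds-based workaround, a hedged inputs.conf sketch; the script path is a placeholder for whatever scripted input you run:

[script://./bin/ds_guid_reset.sh]
# Cron form that exhibited the occasional early/double run on UF 9.1.2 / 9.1.5:
# interval = 2 7 * * *
# Seconds-based form used as a workaround (runs every 24 hours from forwarder start,
# not at a specific hour and minute):
interval = 86400
disabled = 0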
Unfortunately, I'm getting the error: "Error in 'EvalCommand': The arguments to the 'searchmatch' function are invalid." I've tried both solutions.
You need to provide your raw event in a code block - use the code block button in the editor to open a code block and paste your raw event into it, so we can see exactly what you are dealing with.
Sorry.... I'm going to need to combine the policyId for both logs into one.  Both do not work..  Thanks again for your help..

Call out:
index=xxx appSubLvlNam="QAA" (msgTxt="Starting Controller=Full Action=GetFullReportAssessment data*" OR msgTxt="API=/api/full/reportAssessment/ CallStatus=Success*")
| eval msgTxt "Starting Controller=Full Action=GetFullReportAssessment data={"policyId":"Q123456789","inceptionDate":"20241011","postDate":"1900-01-01T12:00:00"}"
| rex "\"policyId\":\"(?<policyId>\w+)\""
| table policyId

Response:
index=xxx appSubLvlNam="QAA" (msgTxt="Starting Controller=Full Action=GetFullReportAssessment data*" OR msgTxt="API=/api/full/reportAssessment/ CallStatus=Success*")
| eval msgTxt "API=/api/full/reportAssessment/ CallStatus=Success Controller=full Action=GetFullReportAssessment Duration=17 data={"policyId":"Q123456789","inceptionDate":"20241015","postDate":"1900-01-01T12:00:00"} "
| rex "\"policyId\":\"(?<policyId>\w+)\""
| table policyId