
Thank you so much. I appreciate your help, @scelikok. You're awesome!
I figured out the issue. The API fields needed to be double quoted or the reference broke. I assume it has something to do with the message being a JSON object. Outside of that minor syntax issue, your solution worked. Thank you!

Actually, the quotes are needed because the field names contain a major breaker, the dot ("."). I should have spotted this, given that I just wrote Single Quotes, Double Quotes, or No Quotes (SPL) for people who want to confront the wonkiness of SPL's quoting rules. Here, you need single quotes, not double quotes, and you only need them inside coalesce:

(index=api source=api_call) OR index=waf
| eval sessionID=coalesce(sessionID, 'message.id')
| fields apiName, message.payload, sessionID, src_ip, requestHost, requestPath, requestUserAgent
| stats values(*) as * by sessionID
| table apiName, message.payload, sessionID, src_ip, requestHost, requestPath, requestUserAgent

Technically you can use double quotes around message.id in the rename, fields, stats, and table commands, but they are not necessary. If you use "message.id" in coalesce, however, sessionID will get the literal string "message.id" as its value for events from index api.
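To see the single-quote vs. double-quote difference in isolation, here is a small self-contained sketch (it fabricates a dotted field with makeresults and spath purely for demonstration; the field names are made up):

```
| makeresults
| eval _raw="{\"message\": {\"id\": \"abc\"}}"
| spath
| eval sid_field='message.id'
| eval sid_literal="message.id"
| table sid_field sid_literal
```

sid_field comes out as abc, because single quotes make eval treat message.id as a field reference; sid_literal comes out as the literal string message.id, because double quotes in eval always denote a string literal.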
Flags are specifically those strings that start with - or -- and are preceded by a space (e.g. -config basic_config.cfg or --config basic_config.cfg). We also need to handle cases where a flag is followed directly by its value, and cases where the flag stands alone, indicating a boolean value of true (e.g. aptlaunch clean -@cleanup -remove_all -v2.5). Values are the strings separated from the flags by a space (e.g. -config basic_config.cfg) or by an = (e.g. -config=basic_config.cfg). So basically my regex should pass these test cases:

1. Basic flags with alphanumeric values: cmd: launch test -config basic_config.cfg -system test_system1 -retry 3
2. Flags with dashes and underscores in names and values: cmd: launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4
3. Flags with special characters (@, ., :): cmd: launch update -email user@example.com -domain test.domain.com -port 8080
4. Standalone flags (boolean flags): cmd: launch deploy -verbose -dry_run -force
5. Flags with values including spaces (should be wrapped in quotes): cmd: launch schedule -task "Deploy task" -at "2023-07-21 10:00:00" -notify "admin@example.com"
6. Mixed alphanumeric and special character flags without values: cmd: launch clean -@cleanup -remove_all -v2.5
7. Complex flags with mixed special characters: cmd: launch start -config@version2 --custom-env "DEV-TEST" --update-rate@5min
8. Multiple flags with and without values, mixed special characters: cmd: launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent
9. Flags that could be misinterpreted as values: cmd: launch execute -file script.sh -next-gen --flag -another-flag value
10. Command with no flags at all (to ensure it's skipped properly): cmd: launch execute process_without_any_flags
11. Command with flags having only special characters: cmd: launch special -@@ -##value special_value --$$$ 100
12. Flags with numeric values and mixed special characters: cmd: launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2
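Since Splunk's rex uses PCRE, which Python's re module closely mirrors for a pattern like this, one way to iterate on the regex before pasting it into SPL is a tiny Python harness. This is only a sketch: the character class and the parse_flags helper below are my own assumptions built from the examples above, not the asker's actual regex, and it does not cover every one of the twelve cases (quoted values and standalone flags are shown).

```python
import re

# Assumed flag grammar, reconstructed from the test cases above:
# a flag is -name or --name (letters, digits, _ . @ # $ -); its optional
# value follows after a space or '=', and may be double-quoted.
FLAG_RE = re.compile(r'(?<=\s)(--?[\w.@#$-]+)(?:[=\s]+("[^"]*"|[^-\s]\S*))?')

def parse_flags(cmd):
    """Return (flag, value) pairs; flags without a value default to 'true'."""
    return [(flag.lstrip('-'), value.strip('"') or "true")
            for flag, value in FLAG_RE.findall(cmd)]

print(parse_flags("launch run -env DEV --build-version 1.0.0 -silent"))
# [('env', 'DEV'), ('build-version', '1.0.0'), ('silent', 'true')]
```

Because the value group is optional inside the same match, a standalone flag can never "steal" the next flag as its value, which is the alignment problem discussed later in the thread.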
Can this line be removed on the forwarder using the props.conf files?
It's not recognized as JSON format because it isn't JSON format.  The text before the first { disqualifies it. How are you ingesting this event?  What are the inputs.conf and props.conf settings?
Hi @CarolinaHB, to be recognized as JSON, the event must be nothing but a valid JSON string. If you can send the log without the leading "Feb 5 18:50:30 10.0.30.81" prefix, then it should be shown as JSON.
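If the prefix cannot be removed at the source, one option is to strip it at parse time with a SEDCMD in props.conf on the indexer or heavy forwarder. This is only a sketch: the sourcetype name is a placeholder, and the regex assumes the prefix always looks like the "Feb 5 18:50:30 10.0.30.81 " shape in the sample event.

```
[your_sourcetype]
SEDCMD-strip_syslog_prefix = s/^\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}\s+\S+\s+//
KV_MODE = json
```

With the prefix stripped the remaining raw event is pure JSON, and KV_MODE = json (a search-time setting, so it belongs in props.conf on the search head) lets Splunk extract the fields automatically.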
Hi @oussama1, I think your problem is caused by the regex output. If some of your flags have no associated values in the regex output, it is not possible to match flags with values. Maybe you should change your regex to create output that can be parsed by SPL. If you have anonymized sample events, we can try to help.
Hi @Haleem, Please try the below:

index=xxxx source=*xxxxxx*
| stats avg(responseTime), max(responseTime),
    count(eval(respStatus>=500)) as "ERRORS",
    count(eval(respStatus>=400 AND respStatus<500)) as "EXCEPTIONS",
    count(eval(respStatus>=200 AND respStatus<400)) as "SUCCESS"
    by client_id servicePath
Hello, good morning. Currently I am sending the following data, but when it is ingested into Splunk, it is not recognized as JSON format:

Feb 5 18:50:30 10.0.30.81 {"LogTimestamp": "Tue Feb 6 00:50:31 2024","Customer": "xxxxxx","SessionID": "xxxxxx","SessionType": "TTN_ASSISTANT_BROKER_STATS","SessionStatus": "TT_STATUS_AUTHENTICATED","Version": "","Platform": "","XXX": "XX-X-9888","Connector": "XXXXXXXX","ConnectorGroup": "XXX XXX XXXXXX GROUP","PrivateIP": "","PublicIP": "18.24.9.8","Latitude": 0.000000,"Longitude": 0.000000,"CountryCode": "","TimestampAuthentication": "2024-01-28T09:26:31.592Z","TimestampUnAuthentication": "","CPUUtilization": 0,"MemUtilization": 0,"ServiceCount": 0,"InterfaceDefRoute": "","DefRouteGW": "","PrimaryDNSResolver": "","HostStartTime": "0","ConnectorStartTime": "0","NumOfInterfaces": 0,"BytesRxInterface": 0,"PacketsRxInterface": 0,"ErrorsRxInterface": 0,"DiscardsRxInterface": 0,"BytesTxInterface": 0,"PacketsTxInterface": 0,"ErrorsTxInterface": 0,"DiscardsTxInterface": 0,"TotalBytesRx": 19162399,"TotalBytesTx": 16432931,"MicroTenantID": "0"}

Can you help me? Can this line be removed on the forwarder using the props.conf files? Regards,
Hello @Richfez, I worked on what you mentioned, but it didn't work for me. I also tried this:

props.conf
[source::/var/log/audit/audit.log]
TRANSFORMS-null = setnull

transforms.conf
[setnull]
REGEX = comm="elastic.*"
DEST_KEY = queue
FORMAT = nullQueue

Regards
It's still not working. Basically, this is the SPL I am using:

| eval Aptlauncher_cmd = replace(Aptlauncher_cmd, "=", " =")
| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"
| fillnull value="true" flag value
| eval flag=trim(flag, "-")
| eval value=coalesce(value, "true")
| where isnotnull(flag) AND flag!=""
| table flag, value

But I am still having the same issue: "true" is not added, since flag is a multi-value field I guess, and mvexpand doesn't separate them as pairs when I use it.
Hi @oussama1, You can add the fillnull command after your rex command:

| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"
| fillnull value="true" flag value
Hi @evinasco08, You can check this document; https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Clusterstates  
Hi @abhi04, Can you please try the below?

[sourcetype]
LINE_BREAKER = PrivateKey\s+:\s+\w+([\r\n]+)
SHOULD_LINEMERGE = false
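Before deploying a LINE_BREAKER change (it requires a restart of the parsing tier), it can help to check where the pattern would split. Splunk ends an event just before the capture group and starts the next one just after it. The sketch below emulates that semantics in Python (the sample text is invented; only the PrivateKey/Issuer line shapes come from this thread):

```python
import re

# Emulate Splunk's LINE_BREAKER: the capture group is the delimiter,
# so an event ends before the group and the next starts after it.
BREAKER = re.compile(r"PrivateKey\s+:\s+\w+([\r\n]+)")

def split_events(text):
    events, prev = [], 0
    for m in BREAKER.finditer(text):
        events.append(text[prev:m.start(1)])  # keep the PrivateKey line in the event
        prev = m.end(1)                        # skip the captured newline(s)
    if prev < len(text):
        events.append(text[prev:])             # trailing partial event, if any
    return events

sample = ("Serial     : 01\nIssuer     : CN=Example\nPrivateKey : present\n"
          "Serial     : 02\nIssuer     : CN=Example2\nPrivateKey : present\n")
print(split_events(sample))
```

Each certificate block becomes its own event, ending at its PrivateKey line, regardless of how many lines sit between Issuer and PrivateKey.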
I am working with event data in Splunk where each event contains a command with multiple arguments. I'm extracting these arguments and their associated values using regex, resulting in multi-value fields within Splunk. However, I'm encountering a challenge where some arguments do not have an associated value; for these cases, I would like to set their values to `true`. Here's the SPL I'm using for extraction:

| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"

What I need is to refine this SPL so that after extraction, any argument without a value is automatically assigned a value of `true`. After setting the default values, I would then like to use `mvexpand` to separate each argument-value pair into its own event. Could you provide guidance on how to adjust my regex or SPL command to accomplish this within Splunk?
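One way around the alignment problem is to extract each flag together with its optional value as a single token first, mvexpand those whole tokens, and only then split each token into flag and value. A missing value then can never shift the pairing, because it was never a separate multi-value entry. The following is only a sketch (the field name comes from the question, but the character classes are my assumptions and would need tuning against the real commands):

```
| rex max_match=0 field=Aptlauncher_cmd "\s(?<pair>--?[\w\-.@$#]+(?:[=\s](?:\"[^\"]*\"|[^-\s]\S*))?)"
| mvexpand pair
| rex field=pair "^(?<flag>--?[^\s=]+)(?:[=\s](?<value>.+))?$"
| eval flag=ltrim(flag, "-"), value=coalesce(value, "true")
| table flag, value
```

The second rex runs on a single-value field, so coalesce can safely default the missing values to "true" per row rather than per multi-value position.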
Will this add-on integrate with devices managed in Aruba Central as well?
Yes, it does work with Google Cloud Storage buckets, as the service is S3-compliant. You can use S3 interoperability to create a Kubernetes secret to authenticate with SmartStore. The operator's smartstore or appFramework features can be used for the configuration.
These logs are collected using a scripted input (a .bat file). Each event has several lines; I only showed 6 lines per event, but the repetition is the same, with more lines between PrivateKey and Issuer.
Great. Those two searches should be easy to combine into one. Unfortunately, I've thought about this and I'm not sure I have quite enough information yet, because I feel there's a *lot* still left unsaid. So it would be great if you could describe the use case in a little more detail, just using words and English, ignoring how you think the Splunk solution will be formulated. I'm guessing something like: "whenever a new gz file is created, we need to check whether that file was also processed, and send an email with that information as an alert." That leaves open questions:

- How long is the time period involved?
- How often will you have this alert scheduled? (Different from the first question!)
- Is it a 1-to-1 relationship between "create" events and "processing" events?
- What's the maximum time difference between those two events?
- Does it matter more if a file gets created but not processed, does that situation matter less, or is that actually the only thing that matters?
- Do you already have the filename being extracted as a field in these two events?
- How often do you expect the pair of messages (daily? hourly? hundreds per second?)?

The reason for so many questions is that there are quite a few ways to approach this; some may be better in certain circumstances, some in others. All in all, the details matter, but I'm sure if we get good answers to those (and perhaps a sample of the two events too) we'll get you on your way soon.
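To make the questions above concrete, here is the general shape such a combined search often takes. Everything in it is an assumption for illustration: the index, the "created"/"processed" markers, and the filename field would all come from your actual events.

```
index=your_index ("created" OR "processed") earliest=-24h
| eval action=if(searchmatch("created"), "created", "processed")
| stats count(eval(action="created")) as created,
        count(eval(action="processed")) as processed
        by filename
| where created > 0 AND processed = 0
```

Scheduled as an alert that triggers when results exist, this flags files that were created in the window but never processed; the right time window and schedule depend on the answers to the questions above.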
Or is it possible that the issue is related to the same lookup file being referenced by the next input dropdown, subsequently causing the issue?