All Posts


Thanks, this is what I was hoping for. So the manual is in this case only for safety, I suppose. I already tested it in a development environment and then questioned why I was taking the peers down at all. Will continue testing.

Greetz, Jari
When I go to the Search Head to edit, it shows me this message: "You do not have permissions to edit this configuration".
Should we assume that the DB Connect queries are performed independently on the two days? In other words, there is no DB Connect query that tells you which names appeared yesterday and which appeared today? In that case, you will need to save yesterday's output for today's use. If you don't want to offend the time travel authorities, this practically means you need to save today's output for tomorrow's use. Something like:

````
| inputlookup yesterday.csv ``` assume you did outputlookup yesterday ```
| rename name AS yesterday
| appendcols
    [ dbxquery connection="myDBconnect" query="select name from myDB"
    | outputlookup yesterday.csv ``` save for use tomorrow ```
    | rename name AS today ]
| where isnull(yesterday)
````

Here I use inputlookup and outputlookup (or inputcsv/outputcsv) as an example. If you prefer, you can set up a separate table to store yesterday's names and use dbxquery/dbxoutput. Hope this helps.
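The day-over-day comparison in that SPL boils down to a set difference; here is a minimal Python sketch of the same logic (the names are made up for illustration):

```python
# Names saved from yesterday's run (the yesterday.csv lookup) and names
# returned by today's DB query (the dbxquery step); both are hypothetical.
yesterday_names = {"alice", "bob", "carol"}
today_names = {"bob", "carol", "dave"}

# `where isnull(yesterday)` keeps names that appear today but not yesterday.
new_names = today_names - yesterday_names

# Save today's names so tomorrow's run can diff against them, mirroring
# the `outputlookup yesterday.csv` step inside the subsearch.
saved_for_tomorrow = today_names
```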
How to download and install a trial version of Splunk SOAR and MITRE Framework?
Hello, I hope the steps below are helpful. Configuring SSL for the Splunk management port (port 8089) involves a few steps.

1. Generate SSL certificates: Use a tool like OpenSSL to generate a private key and a certificate signing request (CSR).

```bash
openssl req -new -newkey rsa:2048 -keyout splunk.key -out splunk.csr
```

2. Get the certificate signed: Submit `splunk.csr` to a Certificate Authority (CA) to obtain the signed SSL certificate. Once received, you should have the SSL certificate and the CA's intermediate certificate.

3. Create the combined certificate file: Concatenate the signed certificate, the private key, and the CA intermediate certificate into a single PEM file (this is the order Splunk's documentation describes):

```bash
cat splunk.crt splunk.key ca_intermediate.crt > splunk.pem
```

4. Copy the certificate to the Splunk directory: Move the `splunk.pem` file to the `$SPLUNK_HOME/etc/auth` directory.

```bash
cp splunk.pem $SPLUNK_HOME/etc/auth/
```

5. Configure the management port: Edit `server.conf` in `$SPLUNK_HOME/etc/system/local`. (Note that `web.conf` and `enableSplunkWebSSL` control Splunk Web, which listens on port 8000 by default, not the management port.)

```ini
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/splunk.pem
```

6. Restart Splunk: Restart Splunk to apply the changes:

```bash
$SPLUNK_HOME/bin/splunk restart
```

Ensure Splunk starts without errors.

7. Access the management port via HTTPS: After the restart, you should be able to reach the management port over HTTPS using the URL:

```text
https://your-splunk-server:8089
```

Make sure to replace `your-splunk-server` with the actual server hostname or IP.

Remember to keep backups of any configuration files before making changes, and consult Splunk's official documentation for the specific version you are using, as configurations may vary.
If you are not a partner and you want to get ES, what is the way to do it?
Thank you so much. I appreciate your help, @scelikok. You're awesome!
I figured out the issue. The API fields needed to be quoted or the reference broke. I assume it has something to do with the message being a JSON object. Outside of that minor syntax issue, your solution worked. Thank you!

Actually, the need for quotes is because the field names contain a major breaker, the dot ("."). I should have spotted this, given that I just wrote Single Quotes, Double Quotes, or No Quotes (SPL) for people who want to confront the wonkiness of SPL's quoting rules. Here, you need single quotes, not double quotes, and you only need them inside coalesce:

```
(index=api source=api_call) OR index=waf
| eval sessionID=coalesce(sessionID, 'message.id')
| fields apiName, message.payload, sessionID, src_ip, requestHost, requestPath, requestUserAgent
| stats values(*) as * by sessionID
| table apiName, message.payload, sessionID, src_ip, requestHost, requestPath, requestUserAgent
```

Though you can technically use double quotes around message.id in the rename, fields, stats, and table commands, they are not necessary there. But if you use "message.id" in coalesce, sessionID will get the literal string "message.id" as its value for events from index api.
Flags are specifically those strings that start with - or -- and are preceded by a space (e.g. `-config basic_config.cfg` or `--config basic_config.cfg`). We also need to handle cases where a flag is followed directly by its value, and cases where the flag stands alone, indicating a boolean value of true (e.g. `aptlaunch clean -@cleanup -remove_all -v2.5`). Values are the strings separated from the flags by a space (e.g. `-config basic_config.cfg`) or by an `=` (e.g. `-config=basic_config.cfg`). So basically my regex should pass these test cases:

1. Basic flags with alphanumeric values:
   `cmd: launch test -config basic_config.cfg -system test_system1 -retry 3`
2. Flags with dashes and underscores in names and values:
   `cmd: launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4`
3. Flags with special characters (@, ., :):
   `cmd: launch update -email user@example.com -domain test.domain.com -port 8080`
4. Standalone flags (boolean flags):
   `cmd: launch deploy -verbose -dry_run -force`
5. Flags with values including spaces (should be wrapped in quotes):
   `cmd: launch schedule -task "Deploy task" -at "2023-07-21 10:00:00" -notify "admin@example.com"`
6. Mixed alphanumeric and special character flags without values:
   `cmd: launch clean -@cleanup -remove_all -v2.5`
7. Complex flags with mixed special characters:
   `cmd: launch start -config@version2 --custom-env "DEV-TEST" --update-rate@5min`
8. Multiple flags with and without values, mixed special characters:
   `cmd: launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent`
9. Flags that could be misinterpreted as values:
   `cmd: launch execute -file script.sh -next-gen --flag -another-flag value`
10. Command with no flags at all (to ensure it's skipped properly):
    `cmd: launch execute process_without_any_flags`
11. Command with flags having only special characters:
    `cmd: launch special -@@ -##value special_value --$$$ 100`
12. Flags with numeric values and mixed special characters:
    `cmd: launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2`
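A flag/value grammar like this is easiest to iterate on outside Splunk first. The Python sketch below is an illustrative approximation (not the poster's exact regex): it handles space- and `=`-separated values, quoted values, and standalone boolean flags, treating a token that looks like another flag as a non-value.

```python
import re

# A flag starts with - or -- after whitespace; its value is the next token
# (or quoted string) unless that token is itself a flag. Standalone flags
# get no value group and are later mapped to "true".
FLAG_RE = re.compile(r'\s(--?[\w@.$#-]+)(?:[=\s](?!--?[A-Za-z@$#])("[^"]*"|\S+))?')

def parse_flags(cmd):
    """Return {flag_name: value} with "true" for standalone boolean flags."""
    pairs = {}
    for flag, value in FLAG_RE.findall(cmd):
        pairs[flag.lstrip("-")] = value.strip('"') if value else "true"
    return pairs
```

Extracting flag and value in a single match keeps them paired, which sidesteps the multivalue-alignment problem that shows up when flags and values are extracted as two independent lists.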
Can this line be removed on the forwarder using the props.conf file?
It's not recognized as JSON format because it isn't JSON format.  The text before the first { disqualifies it. How are you ingesting this event?  What are the inputs.conf and props.conf settings?
Hi @CarolinaHB, for an event to be recognized as JSON, the event must be a valid JSON string on its own. If you can send the log with the leading "Feb 5 18:50:30 10.0.30.81" removed, it should then be shown as JSON.
Hi @oussama1 , I think your problem is because of regex output. If some of your flags have no associated values from regex output, it is not possible to match flag and values. Maybe you should chang... See more...
Hi @oussama1 , I think your problem is because of regex output. If some of your flags have no associated values from regex output, it is not possible to match flag and values. Maybe you should change your regex to create an output that can be parsed by SPL. If you have an anonymous sample events , we can try to help.  
Hi @Haleem, please try the below:

```
index=xxxx source=*xxxxxx*
| stats avg(responseTime), max(responseTime),
    count(eval(respStatus>=500)) as "ERRORS",
    count(eval(respStatus>=400 AND respStatus<500)) as "EXCEPTIONS",
    count(eval(respStatus>=200 AND respStatus<400)) as "SUCCESS"
    by client_id servicePath
```
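The `count(eval(...))` trick buckets each event by status range, counting only events where the eval is true. The same logic in a small Python sketch (the sample status codes are made up):

```python
# Hypothetical respStatus values for one client_id/servicePath group.
responses = [200, 201, 301, 403, 404, 500, 502, 200]

# Each count(eval(...)) clause corresponds to one of these conditions.
errors = sum(1 for s in responses if s >= 500)            # 5xx -> ERRORS
exceptions = sum(1 for s in responses if 400 <= s < 500)  # 4xx -> EXCEPTIONS
success = sum(1 for s in responses if 200 <= s < 400)     # 2xx/3xx -> SUCCESS
```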
Hello, good morning. Currently I am sending the following data, but when it is ingested into Splunk it is not recognized as JSON:

```
Feb 5 18:50:30 10.0.30.81 {"LogTimestamp": "Tue Feb 6 00:50:31 2024","Customer": "xxxxxx","SessionID": "xxxxxx","SessionType": "TTN_ASSISTANT_BROKER_STATS","SessionStatus": "TT_STATUS_AUTHENTICATED","Version": "","Platform": "","XXX": "XX-X-9888","Connector": "XXXXXXXX","ConnectorGroup": "XXX XXX XXXXXX GROUP","PrivateIP": "","PublicIP": "18.24.9.8","Latitude": 0.000000,"Longitude": 0.000000,"CountryCode": "","TimestampAuthentication": "2024-01-28T09:26:31.592Z","TimestampUnAuthentication": "","CPUUtilization": 0,"MemUtilization": 0,"ServiceCount": 0,"InterfaceDefRoute": "","DefRouteGW": "","PrimaryDNSResolver": "","HostStartTime": "0","ConnectorStartTime": "0","NumOfInterfaces": 0,"BytesRxInterface": 0,"PacketsRxInterface": 0,"ErrorsRxInterface": 0,"DiscardsRxInterface": 0,"BytesTxInterface": 0,"PacketsTxInterface": 0,"ErrorsTxInterface": 0,"DiscardsTxInterface": 0,"TotalBytesRx": 19162399,"TotalBytesTx": 16432931,"MicroTenantID": "0"}
```

Can you help me? Can this line be removed on the forwarder via the props files?

Regards,
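To see why the event fails JSON detection, and what stripping the prefix achieves, here is a small Python sketch (the sample event is shortened):

```python
import json

# Shortened version of the event: syslog-style prefix + JSON payload.
raw = 'Feb 5 18:50:30 10.0.30.81 {"LogTimestamp": "Tue Feb 6 00:50:31 2024", "TotalBytesRx": 19162399}'

# The whole event is not valid JSON because of the text before the first "{".
try:
    json.loads(raw)
    whole_event_is_json = True
except ValueError:
    whole_event_is_json = False

# Everything from the first "{" onward is a valid JSON object.
payload = json.loads(raw[raw.index("{"):])
```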
Hello @Richfez, I worked on what you mentioned, but it didn't work for me. I also tried this:

props.conf

```
[source::/var/log/audit/audit.log]
TRANSFORMS-null = setnull
```

transforms.conf

```
[setnull]
REGEX = comm="elastic.*"
DEST_KEY = queue
FORMAT = nullQueue
```

Regards
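One way to narrow this down is to test the transform's REGEX against a raw event outside Splunk (the audit lines below are illustrative, not real data). Another thing worth checking: index-time TRANSFORMS run at the parsing stage, i.e. on an indexer or heavy forwarder, not on a universal forwarder, so where the stanzas are deployed matters.

```python
import re

# The REGEX from the setnull transform; Splunk matches it against _raw.
null_regex = re.compile(r'comm="elastic.*"')

# Illustrative auditd-style events (field values are made up).
dropped = 'type=SYSCALL msg=audit(1707220000.123:456): comm="elasticsearch" exe="/usr/share/elasticsearch/jdk/bin/java"'
kept = 'type=SYSCALL msg=audit(1707220000.123:457): comm="sshd" exe="/usr/sbin/sshd"'

matches_dropped = bool(null_regex.search(dropped))
matches_kept = bool(null_regex.search(kept))
```

If the regex matches your real events here but data still reaches the index, the transform is likely being applied in the wrong place rather than being wrong itself.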
It's still not working. Basically, this is the SPL I am using:

```
| eval Aptlauncher_cmd = replace(Aptlauncher_cmd, "=", " =")
| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"
| fillnull value="true" flag value
| eval flag=trim(flag, "-")
| eval value=coalesce(value, "true")
| where isnotnull(flag) AND flag!=""
| table flag, value
```

But I am still having the same issue: "true" is not added, since flag is a multivalue field I guess, and mvexpand doesn't separate them as pairs when I use it.
Hi @oussama1, you can add the fillnull command after your rex command:

```
| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"
| fillnull value="true" flag value
```
Hi @evinasco08, You can check this document; https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Clusterstates  
Hi @abhi04, can you please try the below?

```
[sourcetype]
LINE_BREAKER = PrivateKey\s+:\s+\w+([\r\n]+)
SHOULD_LINEMERGE = false
```
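As a rough way to check the breaking behavior outside Splunk, the split can be simulated in Python. Splunk places the event boundary at the text captured by the first group and discards that text; the sample data below is made up, assuming each record ends with a PrivateKey line.

```python
import re

# LINE_BREAKER pattern from the props.conf suggestion above.
line_breaker = re.compile(r"PrivateKey\s+:\s+\w+([\r\n]+)")

# Hypothetical two-record stream where each record ends with a PrivateKey line.
raw = (
    "Host : serverA\nPrivateKey : abc123\n"
    "Host : serverB\nPrivateKey : def456\n"
)

# Splunk discards the text matched by the first capture group and starts a
# new event right after it; this loop mimics that behavior.
events = []
start = 0
for m in line_breaker.finditer(raw):
    events.append(raw[start:m.start(1)])
    start = m.end(1)
if raw[start:]:
    events.append(raw[start:])
```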