All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I can see a little more information is needed. I can find the user who checks out the CyberArk account "PAM-DomainAdmin*" with this search; it is the suser that is of interest:

index="cyberark" duser="PAM-DomainAdmin*" ("cn2=(Action: Connect)" OR command="Retrieve password")
| rename cn2 as Why
| table _time, user, suser, src, command, Why

Finding the actions the user has performed as PAM admin is done with:

index=wineventlog source=wineventlog:security EventID IN (4756, 4728, 4732) src_user="PAM-DomainAdmin*"
yes, if I run a search index=imperva I do see all the fields
Hi @dm2, are these fields shared at app or Global level? They must be at Global level to be visible in this and in the other apps. Ciao. Giuseppe
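In case it helps: "Global" sharing corresponds to `export = system` in the owning app's `metadata/local.meta`. The stanza below is only a sketch; `your_sourcetype` and `EXTRACT-fields` are placeholder names, and the same change can be made in Splunk Web under Settings &gt; Fields &gt; Field extractions by setting the object's permissions to "All apps".

```ini
# $SPLUNK_HOME/etc/apps/<your_app>/metadata/local.meta
# "your_sourcetype" and "EXTRACT-fields" are placeholders for your
# actual sourcetype and extraction name.
[props/your_sourcetype/EXTRACT-fields]
export = system
```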
Hi, I installed SA_CIM_Vladiator and, when running the % checks to see data model coverage, I see gaps: fields that are extracted, or that are present on specific indexes, are not returned by the app in the results.
You need to get your stats into the same events. Try something like this:

index="disk" sourcetype="Perfmon:disk"
| bin span=10m _time
| eval time=strftime(_time, "%H:%M:%S")
| rename Value as Disque
| append
    [ search index="mem" sourcetype="Perfmon:mem"
    | bin span=10m _time
    | eval time=strftime(_time, "%H:%M:%S")
    | rename Value as Mémoire]
| stats avg(Disque) as Disque avg(Mémoire) as Mémoire by time
| eval Disque=round(Disque, 2)
| eval Mémoire=round(Mémoire, 2)
This is a different question - try searching answers for a relevant topic
Hi @vegarberget, this app is mainly used as support for the Machine Learning Toolkit app (https://splunkbase.splunk.com/app/2890); I never saw this app used by itself. Ciao. Giuseppe
Do you mean client_ip is no longer returned just because index _ad is included in the OR phrase? Does the following return client?

(index=_ad (EventCode=4625 OR (EventCode=4771 Failure_Code=0x18)) Account_Name=JohnDoe Source_Network_Address IN (10.10.10.10 20.20.20.20))
OR (index=_network snat IN (10.10.10.10*,20.20.20.20*)) ``` get relevant data ```
| rex field=client "^(?<client_ip>.*?)\:(?<client_port>.*)" ``` this applies to index _network ```
| dedup client client_ip
| table client client_ip
Hello, does anyone have a quick how-to on using this application, with examples?
Assuming the regex is working and you have successfully extracted all flags: fillnull and coalesce do not help here because they only take effect when a value is completely void. In the old days, people used mvzip/split string maneuvers to handle these conditions. But since 8.2, Splunk has a set of JSON functions that let you represent the data structure more expressively. The following is a possibility:

| rex max_match=0 field=Aptlauncher_cmd "\s(?<flag>--?[\w\-.@|$|#]+)(?:(?=\s--?)|(?=\s[\w\-.\/|$|#|\"|=])\s(?<value>[^\s]+))?"
| eval flag=trim(flag, "-")
| eval flagidx = mvrange(0, mvcount(flag))
| eval compact = mvmap(flagidx, json_object("flag", mvindex(flag, flagidx), "value", mvindex(value, flagidx)))
| fields - flagidx
| mvexpand compact
| eval value = json_extract(compact, "value"), flag = json_extract(compact, "flag")
| eval value = if(isnull(flag), null(), coalesce(value, "true"))

Using the examples you gave, here is an emulation you can play with and compare with real data.

| makeresults format=csv data="Aptlauncher_cmd
launch test -config basic_config.cfg -system test_system1 -retry 3
launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4
launch update -email user@example.com -domain test.domain.com -port 8080
launch deploy -verbose -dry_run -force
launch schedule -task \"Deploy task\" -at \"2023-07-21 10:00:00\" -notify \"admin@example.com\"
launch clean -@cleanup -remove_all -v2.5
launch start -config@version2 --custom-env "DEV-TEST" --update-rate@5min
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent
launch execute -file script.sh -next-gen --flag -another-flag value
launch execute process_without_any_flags
launch special -@@ -##value special_value --$$$ 100
launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2"
``` data emulation above ```

Emulative results (flag | value, grouped here by Aptlauncher_cmd for readability; the actual output repeats the command on every row):

launch test -config basic_config.cfg -system test_system1 -retry 3
    config | basic_config.cfg
    system | test_system1
    retry | 3
launch test -con-fig advanced-config_v2.cfg -sys_tem test_system_2 -re-try 4
    con-fig | advanced-config_v2.cfg
    sys_tem | test_system_2
    re-try | 4
launch update -email user@example.com -domain test.domain.com -port 8080
    email | user@example.com
    domain | test.domain.com
    port | 8080
launch deploy -verbose -dry_run -force
    verbose | true
    dry_run | true
    force | true
launch schedule -task "Deploy task" -at "2023-07-21 10:00:00" -notify "admin@example.com"
    task | "Deploy
    at | "2023-07-21
    notify | "admin@example.com"
launch clean -@cleanup -remove_all -v2.5
    @cleanup | true
    remove_all | true
    v2.5 | true
launch start -config@version2 --custom-env DEV-TEST --update-rate@5min
    config@version2 | DEV-TEST
    custom-env | true
    update-rate@5min | true
launch run -env DEV --build-version 1.0.0 -@retry-limit 5 --log-level debug -silent
    env | DEV
    build-version | 1.0.0
    @retry-limit | 5
    log-level | debug
    silent | true
launch execute -file script.sh -next-gen --flag -another-flag value
    file | script.sh
    next-gen | value
    flag | true
    another-flag | true
launch execute process_without_any_flags
    (no flags, no values)
launch special -@@ -##value special_value --$$$ 100
    @@ | special_value
    ##value | 100
    $$$ | true
launch calculate -add 5 -subtract 3 --multiply@2.5 --divide@2
    add | 5
    subtract | 3
    multiply@2.5 | true
    divide@2 | true

Hope this helps. (As you can see, the regex has some difficulty with quoted values.)
Hello, I need to display 2 curves in my line chart from two different indexes, so I am doing this:

index="disk" sourcetype="Perfmon:disk"
| bin span=10m _time
| eval time=strftime(_time, "%H:%M:%S")
| stats avg(Value) as Disque by time
| eval Disque=round(Disque, 2)
| append
    [ search index="mem" sourcetype="Perfmon:mem"
    | bin span=10m _time
    | eval time=strftime(_time, "%H:%M:%S")
    | stats avg(Value) as Mémoire by time
    | eval Mémoire=round(Mémoire, 2)]

The problem I have is that on the x axis my curves are not aligned on the same time slot. What is wrong, please? Thanks
Hi @manas, first you could simplify your searches, e.g. the first:

| inputlookup metadata.csv
| dedup service
| sort service
| table service

Then in the second dropdown you could use the result of the first to filter results in this way:

<input type="dropdown" token="Service" searchWhenChanged="true">
  <label>Service</label>
  <search>
    <query>
      | inputlookup metadata.csv
      | dedup service
      | sort service
      | table service
    </query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
<input type="dropdown" token="Entity" searchWhenChanged="true">
  <label>Entity</label>
  <search>
    <query>
      | inputlookup metadata.csv WHERE Service="$Service$"
      | dedup entity
      | sort entity
      | table entity
    </query>
  </search>
  <choice value="*">*</choice>
  <default>*</default>
  <initialValue>*</initialValue>
</input>

Ciao. Giuseppe
Something like this?

index=atp category="AdvancedHunting-DeviceFileEvents"
    ( (properties.InitiatingProcessAccountName!="system" properties.ActionType="FileCreated" properties.FolderPath!="C:\\*" properties.FolderPath!="\\*")
    OR (category="AdvancedHunting-DeviceEvents" properties.ActionType="UsbDriveMounted") )
| spath input=properties.AdditionalFields
| fields properties.ReportId, properties.DeviceId, properties.InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn, properties.FileName, properties.FolderPath, properties.SHA256, properties.Timestamp, properties.SensitivityLabel, properties.IsAzureInfoProtectionApplied, properties.DeviceName, DriveLetter, ProductName, SerialNumber, Manufacturer
| rename properties.ReportId as ReportId, properties.DeviceId as DeviceId, properties.InitiatingProcessAccountDomain as InitiatingProcessAccountDomain, properties.InitiatingProcessAccountName as InitiatingProcessAccountName, properties.InitiatingProcessAccountUpn as InitiatingProcessAccountUpn, properties.FileName as FileName, properties.FolderPath as FolderPath, properties.SHA256 as SHA256, properties.Timestamp as Timestamp, properties.SensitivityLabel as SensitivityLabel, properties.IsAzureInfoProtectionApplied as IsAzureInfoProtectionApplied, properties.DeviceName as DeviceName
| eval Timestamp_epoch = strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval MountTime = Timestamp
| eval MountTime_epoch = strptime(MountTime, "%Y-%m-%dT%H:%M:%S.%6N%Z")
| sort Timestamp MountTime
| stats list(FolderPath) as FolderPath, list(DriveLetter) as DriveLetter, list(MountTime) as MountTime, list(MountTime_epoch) as MountTime_epoch, list(Timestamp) as Timestamp, list(Timestamp_epoch) as Timestamp_epoch by DeviceId

Not sure what the real question/difficulty is. The above really performs the same task as your join, just more efficiently with stats.
Hi @shrinathkumbhar, find a Splunk Partner and look at ES with him/her in a PoC. Otherwise, ask your Splunk reference people to enable a trial, but that's difficult; the easiest way is a Splunk Partner, also because it's difficult to evaluate ES without an initial configuration, which requires at least a little knowledge of this solution. If you want to see the features, you can look at the free courses (https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/guest/trqledetail/cours000000000003591?_gl=1*2tb7kt*_ga*MzU3MjIzOTU1LjE3MDA4MDg0NTc.*_ga_GS7YF8S63Y*MTcwNzIwNTAwNC4zMjcuMS4xNzA3MjA1MjYzLjYwLjAuMA..*_ga_5EPM2P39FV*MTcwNzIwNDk5Mi4zMzcuMS4xNzA3MjA1Mjg5LjAuMC4w&_ga=2.78551124.1756370686.1704629137-357223955.1700808457#/guest/trqledetail/cours000000000003591), a forty-minute free course on ES. In addition, you can find many videos in the YouTube Splunk channel: https://www.youtube.com/@Splunkofficial Ciao. Giuseppe
Thanks, this is what I was hoping for. So the manual step is, in this case, only for safety, I suppose. I already tested it in a development environment and then asked myself why take the peers down. Will continue testing. Greetz, Jari
When I go to the search head to edit, it shows this message: "You do not have permissions to edit this configuration".
Should we assume that the DB Connect queries are independently performed on two days? In other words, there is no DB Connect query that tells you which names appeared yesterday and which today? In this case, you will need to save your output from yesterday for today's use. If you don't want to offend the time travel authorities, this practically means you need to save your output from today for tomorrow's use. Something like:

| inputlookup yesterday.csv ``` assume you did outputlookup yesterday ```
| rename name AS yesterday
| appendcols
    [ dbxquery connection="myDBconnect" query="select name from myDB"
    | outputlookup yesterday.csv ``` save for use tomorrow ```
    | rename name AS today ]
| where isnull(yesterday)

Here, I use inputlookup and outputlookup (or inputcsv/outputcsv) as an example. If you prefer, you can set up a separate table to store yesterday's names and use dbxquery/dbxoutput. Hope this helps.
How to download and install a trial version of Splunk SOAR and MITRE Framework?
Hello, I hope the steps below are helpful for you. Configuring SSL for the Splunk management port (8089) involves a few steps.

1. Generate SSL certificates: use a tool like OpenSSL to generate a private key and a certificate signing request.
```bash
openssl req -new -newkey rsa:2048 -keyout splunk.key -out splunk.csr
```
2. Get the certificate signed: submit the `splunk.csr` to a Certificate Authority (CA) to obtain the signed SSL certificate. Once received, you should have the SSL certificate and the CA's intermediate certificate.
3. Create the SSL cert file: combine the signed certificate, private key, and CA intermediate certificate into a single PEM file, in that order (Splunk expects the certificate before the key):
```bash
cat splunk.crt splunk.key ca_intermediate.crt > splunk.pem
```
4. Copy the certificate to the Splunk directory: move the `splunk.pem` file to the `$SPLUNK_HOME/etc/auth` directory.
```bash
cp splunk.pem $SPLUNK_HOME/etc/auth
```
5. Configure splunkd: the management port is served by splunkd, not Splunk Web, so edit `server.conf` in `$SPLUNK_HOME/etc/system/local` (the `enableSplunkWebSSL` setting in `web.conf` only affects Splunk Web on port 8000, not port 8089):
```ini
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/splunk.pem
```
If the private key is encrypted, also set `sslPassword` in the same stanza to the key's passphrase.
6. Restart Splunk to apply the changes:
```bash
$SPLUNK_HOME/bin/splunk restart
```
Ensure Splunk starts without errors.
7. Access the management port via HTTPS: after the restart, you should be able to reach the Splunk management port using the URL:
```text
https://your-splunk-server:8089
```
Make sure to replace `your-splunk-server` with the actual server hostname or IP. Remember to keep backups of any configuration files before making changes and consult Splunk's official documentation for the specific version you are using, as configurations may vary.
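Before trying this against a production instance, the PEM assembly from the steps above can be rehearsed locally with a throwaway self-signed certificate. This is only a dry run: `splunk-test` is a made-up common name, and a real deployment would use a CA-signed certificate and append the intermediate certificate as described above.

```shell
# Generate a short-lived self-signed certificate and unencrypted key
# (non-interactive; "splunk-test" is a placeholder CN, not a real host).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout splunk.key -out splunk.crt -days 1 -subj "/CN=splunk-test"

# Certificate first, then the private key; a production file would
# also append the CA intermediate certificate at the end.
cat splunk.crt splunk.key > splunk.pem

# Confirm the combined file still parses as a certificate.
openssl x509 -in splunk.pem -noout -subject
```

If the last command prints the `splunk-test` subject, the combined PEM is well-formed and the same procedure can be repeated with the real signed certificate.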
If you are not a partner and you want to get ES, then what is the way?