All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, fellow Splunkers! What I am trying to do is detect failed login attempts followed by a root password change on Linux. How can I do this with a correlation search or a data model search?
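A minimal raw-SPL sketch of one possible correlation (the index, sourcetype, message strings, and the threshold of 3 failures are all assumptions based on a typical linux_secure feed; the Authentication data model could be used instead):

index=os sourcetype=linux_secure ("Failed password" OR "password changed for user root")
| eval event_type=if(searchmatch("Failed password"), "failed_login", "root_pw_change")
| stats count(eval(event_type=="failed_login")) as failed_logins min(eval(if(event_type=="failed_login", _time, null()))) as first_failed max(eval(if(event_type=="root_pw_change", _time, null()))) as pw_change_time by host
| where failed_logins > 3 AND pw_change_time > first_failed

The final where clause keeps only hosts where the password change happened after the first failed login inside the search window.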
Hi everybody, let's say I'm monitoring the file test.log, which contains these lines:

2022-22-25 14:00 - row 1
2022-22-25 14:00 - row 2
2022-22-25 14:03 - row 3
2022-22-25 14:05 - row 4

At some point, I overwrite the original file with another test.log containing these lines:

2022-22-25 14:00 - row 1
2022-22-25 14:00 - row 2
2022-22-25 14:03 - row 3
2022-22-25 14:05 - row 4
2022-22-25 17:10 - row 5
2022-22-25 17:10 - row 6

Currently, all the lines of the new test.log are ingested, so I have some duplicates. Is there a way to only index the last two rows?
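A hedged aside on the mechanics: the file monitor decides whether a file is "new" via a CRC of its first 256 bytes (initCrcLength) plus a stored seek pointer, so if the replacement test.log starts with exactly the same bytes and is merely longer, the forwarder should normally resume from the previous offset and index only the added rows. A full re-index of the whole file usually points at something like crcSalt = <SOURCE> on the monitor stanza, or at the replacement changing those first bytes. A minimal stanza that keeps the default CRC behaviour might look like this (path, index, and sourcetype are placeholders):

# inputs.conf on the forwarder
[monitor:///var/log/app/test.log]
index = main
sourcetype = test_log
# crcSalt deliberately left unset so the replaced file is recognised
# by the CRC of its first bytes and reading resumes from the stored offset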
Hi everyone, I want to join 3 sources from the same index. The problem is that with join I lose data, because I'm over 50,000 results in the subsearch. So I'm trying to build my table with a "normal" search.

The logic is as in the picture: the source "NAS" is a reported fault on a specific production number (PRODNR). It includes the production number, the timestamp of the detection, and a unique ID (SNSM, one for every fault) with the part code of the faulty part. "NAU" is the data of the processed/closed defect. The problem here, as you can see, is that the columns in the sources have the same names. MP is the number of the process step, and every source contains the PRODNR. NAS and NAU contain the SNSM IDs.

So I want to join NAU and NAS by the SNSM IDs and see whether they already passed process step 6, and whether a fault was processed before step 6 or was still open at the time the production number passed step 6.

My search that works is shown below, but it's limited to 50,000 results:

index=pfps-k sourcetype=NAS ( PRODNR="1*" OR PRODNR="2*" )
| where 'SPERRE' like ("PZM51%")
| dedup PRODNR,PRUEFUNG
| join type=left max=0 left=NAS right=NAU where NAS.SNSM=NAU.SNSM
    [search index=pfps-k sourcetype=NAU ( PRODNR="1*" OR PRODNR="2*" ) | dedup SNSM]
| join type=left max=0 left=L right=MP where L.NAS.PRODNR=MP.PRODNR
    [search index=pfps-k sourcetype="MP" earliest=@d+6h
    | where MELDEPUNKT=6.0
    | where like(PRODNR,"1%") OR like(PRODNR,"2%")]

I have tried index=pfps-k sourcetype=NAS OR sourcetype=NAU OR sourcetype=MP. I get all the data, but I can't do the same as with the join, i.e. compare the SNSM IDs and then the production step.
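A hedged sketch of the join-free pattern: search all three sourcetypes at once, mark which role each event plays, copy the step-6 time onto every event of the same PRODNR, and then bring NAS and NAU together per SNSM with stats. Field names are taken from the post; the exact conditions (SPERRE, PRUEFUNG, thresholds) would still need to be folded in:

index=pfps-k (sourcetype=NAS OR sourcetype=NAU OR sourcetype=MP) (PRODNR="1*" OR PRODNR="2*")
| eval fault_time=if(sourcetype=="NAS", _time, null())
| eval closed_time=if(sourcetype=="NAU", _time, null())
| eval step6_time=if(sourcetype=="MP" AND MELDEPUNKT==6.0, _time, null())
| eventstats min(step6_time) as step6_time by PRODNR
| stats min(fault_time) as fault_time min(closed_time) as closed_time min(step6_time) as step6_time values(PRODNR) as PRODNR by SNSM
| eval status=case(isnull(step6_time), "not yet at step 6", isnotnull(closed_time) AND closed_time<step6_time, "closed before step 6", true(), "open at step 6")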
Hi, I have to add a custom action option in NEAP for ticket creation for upcoming notable events. I have the APIs and a script ready for ticket creation; I just want to call those APIs from the Splunk UI through the NEAP option. Please refer to the screenshot below, where I want to add one more action. What exactly do we need to do in the backend to add a new action in NEAP? Your responses will be appreciated.
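In case it helps as a starting point (hedged, and assuming NEAP actions are surfaced from custom alert actions, which is how ITSI exposes episode actions as far as I know): a new action is usually packaged as a custom alert action in its own add-on, with an alert_actions.conf stanza and a script in bin/. All names below are placeholders, not an exact recipe for NEAP:

# alert_actions.conf in a custom add-on -- "create_ticket" is a placeholder name
[create_ticket]
is_custom = 1
label = Create ticket
description = Call the external ticketing API for this notable event
payload_format = json
python.version = python3

# the action logic would live in bin/create_ticket.py in the same add-on,
# reading the JSON payload from stdin and calling the ticketing API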
Hi all, I'm configuring Enterprise Security and I found an unexpected issue. I'm trying to use the Maps feature associated with a source in the "Incident Review" dashboard. In detail: I have some notables, many of which contain an IP external to the customer, and I'd like to visualize the geographic origin of this IP using the Maps feature associated with the Additional Fields contained in the notable details. But when I right-click and choose the "Map <IP address>" option, it opens Google Maps always at the same coordinates, which aren't the ones I'm looking for. Do I need to configure something to make this feature work, or has someone else experienced the same issue? Thank you for your attention. Ciao. Giuseppe
Hello, I use Splunk as an indexer and deployment server, and I have one universal forwarder installed. I'm getting an error when the Splunk forwarder tries to read one log file: Ignoring file '/mnt/scn_data/log.txt' due to: binary. It works after I put a props.conf file into the app folder on the forwarder:

[cx_scan_logs]
CHARSET = UTF-16LE
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom

But after I make changes on the index server, the files on the forwarder such as inputs.conf are updated and props.conf is deleted, and I get the error again. How can I tell Splunk not to delete the props.conf on the forwarder?
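A hedged note on why this happens: by default a deployment client replaces the whole contents of a deployed app with whatever the deployment server holds, so a file added locally inside that app on the forwarder is removed on the next update. The usual fix is to put the props.conf into the app on the deployment server itself (or into a separate deployed app), for example (app name and paths are placeholders):

# on the deployment server
$SPLUNK_HOME/etc/deployment-apps/scan_logs_inputs/local/inputs.conf
$SPLUNK_HOME/etc/deployment-apps/scan_logs_inputs/local/props.conf

# props.conf content, as in the post
[cx_scan_logs]
CHARSET = UTF-16LE
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom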
I'm a Splunk PS consultant and have been assisting a client with upgrades and migration to SVA-compliant architecture (C1). All is well and fully operational on 9.0.2, and the client is happy with this improved and fully compliant deployment.

Following on from that work, we reviewed what sensible security hardening could be implemented across the deployment, and we agreed that the pass4SymmKey for the clustering stanza could be longer and more complex. We followed the docs, went to each instance's $SPLUNK_HOME/etc/system/local/server.conf, and updated the key in plain text. We restarted the splunkd daemon via systemd on all instances and checked the infrastructure. All functional: the cluster remains operating properly, ingesting data, clustering operations correct.

However, there is one flaw, and that is the MC. It is no longer able to properly query the cluster. It also hosts the DS, which is working properly and serving apps to clients. It has all the search parameters correct and all nodes listed, and it was functional immediately before the rotation. Yes, I checked btool for the values on disk and decrypted them; all appears fine. After an hour of troubleshooting and checking splunkd.log there was still no clue, but we thought perhaps we had gone too complex on the string with special characters. Rinse and repeat, updating all cluster nodes' pass4SymmKey to something less complex without special characters. It still failed to operate properly, and we spent another hour very carefully reviewing every stanza in operation and for consistency. We then decided to set up an MC on another node to compare: same exact issue, and all checks just come back greyed out. With time pushing on, we decided to revert to the original pass4SymmKey and restart the daemon. Guess what: still not working.

We moved on to other pressing matters, but I do not want to leave my client without an answer or approach in the medium term. Potential for a bug? A niche operation, rotating pass4SymmKey?
Hi all, I tried to customize the Incident Review dashboard to display some additional fields such as user, src, or dest, as described in the Enterprise Security Admin course. At first I found that to have these fields in the Additional Fields, I must also add them to the main dashboard columns, otherwise the additional field isn't displayed; this is already something not documented. But the problem is that the field is displayed only for some notables and not for all of them, as I expected. I also found that the src field is present in all the notables (except risk-based notables), whereas user and dest (the most important is user, which should always be present) are sometimes present and sometimes not. I supposed that the issue was the correlation search not adding the field to the notable, but when I open the notable through the contributing events link, the field is always present. Has someone else experienced this issue? Thank you for your attention. Ciao. Giuseppe
Hello Splunk lovers! I need help with a date field, and I need it fast. I have a field, date_started, with this format, for example: 01.01.2016 0:00:00. I want to take only the year from date_started, like 2016. Please help!
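A minimal sketch of one way to do this, assuming date_started really is in day.month.year format as shown (the field name is taken from the post):

| eval year=strftime(strptime(date_started, "%d.%m.%Y %H:%M:%S"), "%Y")

If the timestamp parsing is awkward, a plain text extraction also works:

| rex field=date_started "\.(?<year>\d{4})"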
[user]$ sudo rpm -U --prefix=/opt/splunk splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm
error: splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm: not an rpm package (or package manifest)

Note: /opt/splunk is the Splunk install location on my HF. Please advise.
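A hedged first check: this error usually just means the file on disk is not a valid RPM (for example a truncated download, or a tarball saved under an .rpm name). Before anything Splunk-specific, it may be worth verifying what was actually downloaded:

file splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm
sha256sum splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm
# compare the checksum with the value published on the Splunk download page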
Hi, I am seeing a difference in count between stats and timechart for the same search and the same filters.

Stats command (last 24 hours):

search | bin span=1d _time
| stats count by Status
| eventstats sum(*) as sum_*
| foreach * [eval "Comp %"=round((count/sum_count)*100,2)]
| rename count as Count
| fields - sum_count

comp 7126, error 37, Noncomp 146, NonRep 54, Total 7363

Timechart (last 30 days):

search | bin span=1d _time
| timechart count by Status
| addtotals
| eval "Comp %"=round((Comp/Total)*100,2)
| eval "Error %"=round((Error/Total)*100,2)
| eval "Noncomp %"=round((Noncomp/Total)*100,2)
| eval "NonRep %"=round((NonRep/Total)*100,2)
| fields _time,*%

comp 7126, error 36, Noncomp 146, NonRep 53, Total 7361

There is a difference of 2 in the counts between these two searches. I am using a macro before the timechart or stats. Please help me with the cause of this issue or a solution.
I have a scenario where I want to expand a field and show the values as individual events. Below is my query, which works fine for smaller time ranges, but for larger ranges it's not efficient.

index=app_pcf AND cf_app_name="myApp" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| spath "msg.logMessage.matched_locations{}.locationId"
| search "msg.logMessage.numReturnedMatches">0
| mvexpand "msg.logMessage.matched_locations{}.locationId"
| fields "msg.logMessage.matched_locations{}.locationId"
| rename "msg.logMessage.matched_locations{}.locationId" to LocationId
| table LocationId

I have a JSON array called matched_locations which has the field locationId. I can have at most 10 locationIds in a matched_locations, and I have thousands of events in the time range that contain this matched_locations array. Below is an example of one such event with a bunch of matched_locations:

cf_app_name: myApp
cf_org_name: myOrg
cf_space_name: mySpace
job: diego_cell
message_type: OUT
msg: {
  application: myApp
  correlationid: 0.af277368.1669261134.5eb2322
  httpmethod: GET
  level: INFO
  logMessage: {
    apiName: Matches
    apiStatus: Success
    clientId: oh_HSuoA6jKe0b75gjOIL32gtt1NsygFiutBdALv5b45fe4b
    error: NA
    matched_locations: [
      { city: PHOENIX, countryCode: USA, locationId: bef26c03-dc5d-4f16-a3ff-957beea80482, matchRank: 1, merchantName: BIG D FLOORCOVERING SUPPLIES, postalCode: 85009-1716, state: AZ, streetAddress: 2802 W VIRGINIA AVE }
      { city: PHOENIX, countryCode: USA, locationId: ec9b385d-6283-46f4-8c9e-dbbe41e48fcc, matchRank: 2, merchantName: BIG D FLOOR COVERING 4, postalCode: 85009, state: AZ, streetAddress: 4110 W WASHINGTON ST STE 100 }
      { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] }
    ]
    numReturnedMatches: 10
  }
  logger: c.m.c.d.MatchesApiDelegateImpl
}
origin: rep
source_instance: 1
source_type: APP/PROC/WEB
timestamp: 1669261139716063000

Can anyone help me with how I can expand this field efficiently? Thank you.
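A hedged sketch of one common way to make this cheaper: drop every field except the multivalue one before mvexpand, so each expanded row carries almost no data, and filter on numReturnedMatches before expanding. Field names are taken from the post:

index=app_pcf AND cf_app_name="myApp" AND message_type=OUT AND msg.logger=c.m.c.d.MatchesApiDelegateImpl
| spath output=numMatches path=msg.logMessage.numReturnedMatches
| where tonumber(numMatches) > 0
| spath output=LocationId path=msg.logMessage.matched_locations{}.locationId
| fields LocationId
| mvexpand LocationId
| table LocationId

If only distinct locations or their counts are needed, replacing the last two lines with | stats count by LocationId avoids the mvexpand memory limits entirely.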
Hi All, I have a hostname field containing \\sent134 and I need to remove the leading \\ with a regex, so that it becomes sent134.

Actual: \\sent134
Expected: sent134

Please provide a regex to remove \\ from the hostname field. Thanks
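A minimal sketch, assuming the field is literally named hostname (adjust the field name as needed; backslash escaping in eval strings is easy to get wrong, so the number of backslashes may need tuning in your environment):

| eval hostname=replace(hostname, "^\\\\+", "")

Here "^\\\\+" ends up as the regex ^\\+, i.e. strip one or more leading backslashes.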
We are using the Event Hubs modular input from the Splunk TA for Microsoft Cloud Services. In our system, we have configured many Event Hubs inputs. However, one particular input has done some very strange things. Most of the events received from this input are processed correctly; however, some of the events arrive in "batches", inside a "records" array. These batches can contain up to 300 child objects, i.e. 300 separate events. In inputs.conf, we have one input configured for this Event Hub. The interval is set to 300 secs; max_wait_time and max_batch_time are left at their defaults. Has anyone else seen this before? @jconger
Hi, my datasets are much larger, but these represent the crux of my hurdle:

sourcetype=transaction, fields: transaction_id, user
sourcetype=connection, fields: x_transaction_id, user, action

Now I need to build an SPL search that detects huge amounts of data sent to external domains in a single event. I have all the required details in the transaction sourcetype itself, but the allowed or blocked action is not there; that is specified in the connection sourcetype. I just need to merge the action details into the transaction events. I tried with join, but the results are inappropriate. Can this be done more efficiently with stats?
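A hedged sketch of the stats-based merge, assuming transaction_id and x_transaction_id carry the same value (the index name is a placeholder, and any other transaction fields you need would be added as extra values()/first() clauses):

index=your_index (sourcetype=transaction OR sourcetype=connection)
| eval txn_id=coalesce(transaction_id, x_transaction_id)
| stats values(user) as user values(action) as action count(eval(sourcetype=="transaction")) as txn_events by txn_id
| where txn_events > 0

This keeps one row per transaction ID, with the action copied over from the matching connection event, and drops IDs that only appear in the connection sourcetype.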
Hi All, we have configured the Safe Links policy in Microsoft 365. However, we only get logs for blocked URLs, not allowed URLs, through the add-on. Is there something that can be done to pull all URLs scanned by Safe Links? Thanks, Prabs
Hi All, I have hashed the user field with sha256:

index=abc sourcetype=xyz | eval domain = sha256(User) | table domain

I am able to see the hashed values under the domain field. Is there a Splunk command to decrypt it?
We have alerts routed to PagerDuty from Splunk. We are debugging whether alerts got routed to PagerDuty. Which index should we query for the PagerDuty call response code?
Below is the current output (raw) for a specific field:

node0:
--------------------------------------------------------------------------
/var/: No such file or directory
/var/tmp/: No such file or directory
/var/: blablablaba.txt

node1:
--------------------------------------------------------------------------
/var/: No such file or directory
/var/tmp/: No such file or directory

What I need help with is to group node0 and node1 as their own groups, and only show the row after the "/var" line if it is anything BUT "No such file or directory".

So the output would end up being:

NODE0:
/var/: blablablaba.txt
NODE1:

Thanks for the help in advance.
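A hedged sketch of one way to pull this apart in SPL, assuming the whole block above arrives in a single multiline field called output (all field names are placeholders):

| rex field=output max_match=0 "(?<node_block>node\d+:[\s\S]+?)(?=node\d+:|$)"
| mvexpand node_block
| rex field=node_block "(?<node>node\d+):"
| rex field=node_block max_match=0 "(?<path_line>/\S+: .+)"
| eval path_line=mvfilter(NOT match(path_line, "No such file or directory"))
| table node path_line

The first rex splits the raw text into one multivalue entry per node block, mvexpand gives one row per node, and mvfilter keeps only the path lines that are not "No such file or directory", so node1 ends up with an empty column.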
Hi Friends, my current situation is that I'm monitoring files from this path:

source="/opt/redprairie/prod/prodwms/les/log/SplunkMonitoring/*"

In this path we receive 2 different files:
1. support-prodwms--<date & time>.zip
2. commandUsage_<date & time>.csv

I want to monitor the first file (support-prodwms--<date & time>.zip). Inside the zip file we have 15 different files:
1. probes.csv
2. tasks.csv
3. jobs.csv
4. log-files.csv
and so on...

I want to monitor only tasks.csv and jobs.csv from the zip; the remaining files should not be monitored.

Currently I'm using this in inputs.conf:

[monitor:///opt/redprairie/*/*/les/log/SplunkMonitoring/support-prodwms--*]
index = pg_idx_whse_prod_events
sourcetype = SPLUNKMONITORINGNEW
whitelist = /tasks\.csv$
crcSalt = <string>
recursive = true
disabled = false
_meta = entity_type::NIX service_name::WHSE environment::PROD

Kindly help me, friends. I've been struggling with this for the last 2 days. Thanks in advance. @gcusello @richgalloway @splunk
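A hedged note on the whitelist: as far as I understand, whitelist on a [monitor://] stanza is matched against the path of the file Splunk picks up on disk, not against the names of members inside a zip, so with the stanza above the whole archive is either indexed or skipped. One common workaround is to unpack the archive (for example with a scheduled script) into a staging directory and monitor that directory with a whitelist matching only the two wanted files, roughly like this (the staging path is a placeholder):

[monitor:///opt/redprairie/prod/prodwms/les/log/SplunkMonitoring/extracted]
index = pg_idx_whse_prod_events
sourcetype = SPLUNKMONITORINGNEW
whitelist = (tasks|jobs)\.csv$
disabled = false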