
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am doing a lookup in a customer's Splunk Cloud, or more precisely, I am using the Splunk Add-on for Cisco ASA, which has two lookups for the action field. My problem is that in this environment something overwrites or clears the action field after the lookup. The lookup takes vendor_action as input and outputs the field as both Cisco_ASA_action and action. Cisco_ASA_action exists after the lookup; action is missing after the lookup (but it certainly existed before). If I output the field as action2, everything works fine. If I output the field as action, the field is missing. Does anybody have a clue what is happening here? Even if the lookup failed, the action field should still exist. I know the issue is not with the ASA add-on, as the lookup works fine on other search heads; something is clearing or overwriting the action field. As far as I know, the lookup is the last thing that happens, so I cannot explain what is going wrong. There are also no lookups from other apps that might cause this behaviour. Any suggestions?
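A quick way to narrow this down is to run the lookup manually and keep the before/after values in separate fields. This is only a sketch: the index, sourcetype, and lookup name (cisco_asa_action_lookup) are placeholders to be replaced with the real names from the add-on's transforms.conf.

```
index=your_asa_index sourcetype=cisco:asa
| eval action_before=action
| lookup cisco_asa_action_lookup vendor_action OUTPUT action AS action_after
| table vendor_action action_before action_after
```

If action_after is populated here but action vanishes in a plain search, some other search-time knowledge object on that search head (for example another automatic lookup that also OUTPUTs action) is the likely culprit; listing the automatic lookups via | rest /services/data/props/lookups can help confirm which apps contribute one.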
Hi, I have a list of terminated users with "Last name", "First name", and their email. I am trying to set up a query that compares my list ("| inputlookup terminated_account") with all logins on the platform, plus an alert that fires every time a match is found. Is this possible, and if so, could you help me? Thank you.
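One common pattern, sketched under the assumption that the login events carry the user's email in a field called user and that the lookup's email column is called email (adjust the index, sourcetype, and field names to your environment):

```
index=auth_logs sourcetype=login action=success
| eval email=lower(user)
| lookup terminated_account email OUTPUT email AS terminated_email
| where isnotnull(terminated_email)
| stats count AS logins, latest(_time) AS last_login BY email
```

Saved as an alert with the trigger condition "number of results is greater than 0", this fires whenever a terminated account logs in.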
Hello, the response time is sometimes quite long, but the microservice itself responds very quickly (it just returns some data without any additional requests to other endpoints). Could the problem be somewhere in CloudFront / nginx / the load balancer? Can someone please tell me what the reason is? As you can see from the picture, the response time measured directly at the microservice is around 3 ms, but from nginx it is around 18 seconds.
I've checked a number of threads about breaking JSON files and I've tried a number of offered solutions and none seem to work. I'm running 8.1.0 and I don't remember seeing this as much of an issue in previous versions. The snort (ids-u2json) JSON is lint-valid as follows:   {"type": "event", "event": {"msg": "ET INFO Microsoft Connection Test", "classification": "Potentially Bad Traffic", "sensor-id": 0, "event-id": 581, "event-second": 1607588446, "event-microsecond": 790456, "signature-id": 2031071, "generator-id": 1, "signature-revision": 2, "classification-id": 3, "priority": 2, "sport-itype": 63591, "dport-icode": 80, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "192.168.1.125", "destination-ip": "13.107.4.52"}} {"type": "event", "event": {"msg": "ET POLICY PE EXE or DLL Windows file download HTTP", "classification": "Potential Corporate Privacy Violation", "sensor-id": 0, "event-id": 582, "event-second": 1607588467, "event-microsecond": 769440, "signature-id": 2018959, "generator-id": 1, "signature-revision": 4, "classification-id": 33, "priority": 1, "sport-itype": 80, "dport-icode": 63676, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "205.185.216.10", "destination-ip": "192.168.1.125"}} {"type": "event", "event": {"msg": "ET INFO Packed Executable Download", "classification": "Misc activity", "sensor-id": 0, "event-id": 583, "event-second": 1607588467, "event-microsecond": 769340, "signature-id": 2014819, "generator-id": 1, "signature-revision": 1, "classification-id": 29, "priority": 3, "sport-itype": 80, "dport-icode": 63676, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "205.185.216.10", "destination-ip": "192.168.1.125"}}   props.conf on the UF is as follows:   [sourcetype=json] KV_MODE=json AUTO_KV_JSON=true 
NO_BINARY_CHECK = true disabled = false SHOULD_LINEMERGE = false TIME_FORMAT = "event-second": %s, "event-microsecond": %6N LINE_BREAKER = }}(^s)    and props.conf on the indexer/search head as follows:   [stanza] TZ = UTC SHOULD_LINEMERGE = false [_json] DATETIME_CONFIG = LINE_BREAKER = }} NO_BINARY_CHECK = true disabled = false KV_MODE = json [json_no_timestamp] NO_BINARY_CHECK = true SHOULD_LINEMERGE = false disabled = false   According to what I've told the UF to do in props.conf, the JSON events should be splitting up the JSON events using the double braces LINE_BREAKER }} as follows:   {"type": "event", "event": {"msg": "ET INFO Microsoft Connection Test", "classification": "Potentially Bad Traffic", "sensor-id": 0, "event-id": 581, "event-second": 1607588446, "event-microsecond": 790456, "signature-id": 2031071, "generator-id": 1, "signature-revision": 2, "classification-id": 3, "priority": 2, "sport-itype": 63591, "dport-icode": 80, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "192.168.1.125", "destination-ip": "13.107.4.52"}} {"type": "event", "event": {"msg": "ET POLICY PE EXE or DLL Windows file download HTTP", "classification": "Potential Corporate Privacy Violation", "sensor-id": 0, "event-id": 582, "event-second": 1607588467, "event-microsecond": 769440, "signature-id": 2018959, "generator-id": 1, "signature-revision": 4, "classification-id": 33, "priority": 1, "sport-itype": 80, "dport-icode": 63676, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "205.185.216.10", "destination-ip": "192.168.1.125"}} {"type": "event", "event": {"msg": "ET INFO Packed Executable Download", "classification": "Misc activity", "sensor-id": 0, "event-id": 583, "event-second": 1607588467, "event-microsecond": 769340, "signature-id": 2014819, "generator-id": 1, "signature-revision": 1, "classification-id": 29, "priority": 
3, "sport-itype": 80, "dport-icode": 63676, "protocol": 6, "impact-flag": 0, "impact": 0, "blocked": 0, "mpls-label": null, "vlan-id": null, "pad2": null, "source-ip": "205.185.216.10", "destination-ip": "192.168.1.125"}}   but it doesn't. Instead, the UF clumps them together as a single event and only reports on the first JSON stanza. Nothing I've tried for LINE_BREAKER seems to work - the UF seems to ignore it. Many thanks
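Two things commonly cause exactly this symptom (offered as a sketch, not verified against this deployment): a universal forwarder does not apply LINE_BREAKER at all unless INDEXED_EXTRACTIONS is involved, because event breaking happens at the first full parsing tier (indexer or heavy forwarder); and LINE_BREAKER must contain a capture group, whose match is consumed as the event boundary. Also, props.conf stanza headers are just the sourcetype name, so a stanza written as [sourcetype=json] never matches anything. Something along these lines on the indexers, with the stanza name changed to the actual sourcetype:

```
[snort_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}\}([\r\n]+)
TIME_PREFIX = "event-second":\s*
TIME_FORMAT = %s
KV_MODE = json
```

Here the captured newline between the closing }} and the next { is discarded as the boundary, the epoch timestamp is read from event-second, and KV_MODE = json handles field extraction at search time on the search head.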
Hello, I haven't found the correct way to search for events between specific hours. I want to search over the last 7 days, but only for events between 10:50 PM and 01:30 AM. Thanks.
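One sketch of a way to do this (the index name is a placeholder): compute each event's clock time as a number and filter on it. Because the 10:50 PM to 1:30 AM window wraps past midnight, the condition is an OR rather than an AND.

```
index=your_index earliest=-7d@d latest=now
| eval hhmm=tonumber(strftime(_time, "%H%M"))
| where hhmm >= 2250 OR hhmm <= 130
```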
Hi, (Img-1) (Img-2). What I want is: in Img-1, when I click on a Success/Failure/Total_Transaction column value, I want the Microservices transaction table in Img-2 to change accordingly. For example, if I click the Failure column for LostStolen Services, which has a value of 35, I want the next table to print all 35 transaction IDs with their status. NOTE: for Success and Failure I have used different flags for counting; I could not figure out how to use the same flag to count for two different columns.
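In Simple XML this kind of click-through is usually done with drilldown tokens. A sketch, where the index, Microservice field, and token names are assumptions to be replaced with the real names from the dashboard:

```xml
<table>
  <search><query>... the Img-1 summary search ...</query></search>
  <drilldown>
    <!-- capture the clicked row's service and the clicked column's name -->
    <set token="sel_service">$row.Microservice$</set>
    <set token="sel_status">$click.name2$</set>
  </drilldown>
</table>
<table>
  <title>Transactions for $sel_service$ / $sel_status$</title>
  <search>
    <query>index=transactions service="$sel_service$" status="$sel_status$"
| table transaction_id status</query>
  </search>
</table>
```

Adding depends="$sel_service$" to the second panel keeps it hidden until a cell has been clicked.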
Hi, I have an urgent question. Can I rename the Security ID of a PC without renaming the PC locally? I renamed the PC after installing the Splunk forwarder, but the Security ID in the logs still shows the old name. How can I fix it without having to rename it locally in inputs.conf and server.conf? Thanks for a fast response.
Hi, we have a Splunk Windows universal forwarder which is not reporting all the metrics configured for the Splunk Add-on for Microsoft Windows. All other Windows universal forwarders report CPU, processor information, disk, and memory; the one with the issue only reports processor information. The Splunk Add-on for Microsoft Windows is pushed by the deployment server, so all configs are the same. Any advice, please?
Hi, I have checked the official doc for volume metrics in Server Visibility but was unable to understand it clearly, so in other words here is my requirement:
- We want to monitor all volumes, such as /tmp, /var, and /var/log.
- Monitoring should be dynamic, meaning if a mount point is removed or added tomorrow, AppDynamics should automatically adjust its monitoring.
- Should we create individual conditions for every volume, or use the metric Hardware Resources|Volumes|Used(%)?
I am trying to extract multiple key value pairs from data like this:   Image |Loading |\path\to\obfuscated\\CT_384.dcm ------------------------------------------------------------------------------------------------ Image |Photometric Interpretation |MONOCHROME2 | Image |Compression |EXPLICIT_LITTLE_ENDIAN (with metaheader) Series |Creation... | | Image |Image width [mm] |512 | Image |Image height [mm] |512 | Image |Bit-Depth |BLuint16 | Image |Row Vector [mm] |(1/0/0) | Image |Column Vector [mm] |(0/1/0) | Image |Image Position [mm] |(-249.512/-504.512/-226.5) | Image |PixelsizeX [mm] |0.976563 | Image |PixelsizeY [mm] |0.976563 | Image |Img Orientation Info |axial; RL: 0.000°, AP: 0.000°, HF: -90.000°; normal: 0.000°; Orientation supported Image |Horizontal flip |1 | Image |Vertical flip |0 | Image |Brainlab Subsystem |[OK] | Image |InstNumber/BLScanNum |170 | Image |Series number |0 | Image |Instance UID |1.3.12.2.1107.5.1.4.53031.000000000000000000001 Image |Study UID |1.3.12.2.1107.5.1.4.53031.000000000000000000001 Image |Series UID |1.3.12.2.1107.5.1.4.53031.000000000000000000000 Image |BitsAllocated |16 | Image |PatientOrientation |Supine | Image |HeadFeetOrientation |HeadFirst | Image |Modality |CT | Image |SliceThickness [mm] |1 | Image |AcquisitionID |2 | Image |FrameOfRefUID |1.3.12.2.1107.5.1.4.53031.000000000000000000009 Image |ScanDate |10-AUG-2009 | Image |ScanTime |093806.140000 | Image |Manufacturer |SIEMENS | Image |Manufacturer model |Sensation 10 | Image |Institution name |Maastro Clinic | Image |Station name |MEDPC | Image |Software Versions |syngo CT 2006G | Image |Comment |Head^ST_Head1mmCONTRAST (Adult); CT10082009093806:Head 1.0 B31s; ST_Head1mmCONTRAST; CHEST ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------   So far I have tried the following:   index="test" | rex max_match=0 field=_raw 
"(?<k1>[^\|\s]+)\s+\|(?<k2>[^\|]+)\|(?<v>[^\|]+)[\|]?[\r\n]"
| eval z=mvzip(mvzip(trim(k1), trim(k2), " "), trim(v), "~")
| mvexpand z
| rex field=z "(?<key>[^~]+)~(?<value>.*)"
| eval {key} = value   This yields the desired key-value pairs, but each individual key-value pair is assigned to its own copy of the event, generating a cross product of events and extracted keys. I don't want that; I want all extractions tied to the unique event they were extracted from. So I tried this:   index="test"
| rex max_match=0 field=_raw "(?<k1>[^\|]+)\|(?<k2>[^\|]+)\|(?<v>[^\|]+)[\|]?[\r\n]+"
| eval key=mvzip(trim(k1), trim(k2), " ")
| eval {key}=v   This extracts the keys and values properly, but when the {key}=v statement is evaluated, it merges all key values into a single field name and assigns each individual value to it, so now I have separate values but no unique keys, and my key looks like this:   Image Loading Image Photometric Interpretation Image Compression Image Image width [mm] Image Image height [mm] Image Bit-Depth Image Row Vector [mm] Image Column Vector [mm] Image Image Position [mm] Image PixelsizeX [mm] Image PixelsizeY [mm] Image Img Orientation Info Image Horizontal flip Image Vertical flip Image Brainlab Subsystem Image InstNumber/BLScanNum Image Series number Image Instance UID Image Study UID Image Series UID Image BitsAllocated Image PatientOrientation Image HeadFeetOrientation Image Modality Image SliceThickness [mm] Image AcquisitionID Image FrameOfRefUID Image ScanDate Image ScanTime Image Manufacturer Image Manufacturer model Image Institution name Image Station name Image Software Versions Image Comment   How can I get the best of both:
- evaluated keys from the search-time extraction
- each value assigned to its proper key
- all key-value pairs connected to the unique individual event they came from?
Hey guys, I'm a Splunk beginner and I have a question. I get an input value, but the value has trailing spaces which I want to remove. Here's my code:   <input type="text" token="field55">      <!-- field55 is the 'temp' token -->
<label>test</label>
<change>
  <eval token="field5">trim($value|s$)</eval> <!-- field5 is the 'real' token -->
</change>
</input>
.....
<query> index=mail mail_sender="$field5$" | table mail_sender </query>   If I input 'test123  ', I want the resulting value to be 'test123' (no spaces). How can I do that? Sorry it's hard to read; please help me out!
Hello, I have this query:

index="dpsn_students" earliest=0 latest=now suspended=false AND (class="*" OR class="* *")
| dedup primaryEmail
| rename primaryEmail as email
| eval class=upper(class)
| join type=outer email
    [ search index="dpsn_meet"
    | rex field=date "(?<yy>[^\.]*)\-(?<mm>[^\.]*)\-(?<dd>[\S]*)T(?<hh>[^\.]*)\:(?<min>[^\.]*)\:(?<sec>[^\.]*)\."
    | eval ndatetime = yy.mm.dd.hh.min.sec
    | eval _time=strptime(ndatetime,"%Y%m%d%H%M%S") + 19800
    | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S")
    | eval Duration = duration_seconds/60
    | stats sum(Duration) as tot by email]
| join type=outer class
    [ search index="dpsnapitt" AND (class="*" OR class="* *") AND day="DAY 1"
    | stats count as Total by class
    | eval class_time=Total*30]
| fillnull value="0"
| where class!="0"
| eval m=0.75
| eval p=1
| eval n=class_time
| eval o=m*n*p
| where tot >= o
| stats count as "Total"

If I run this query on Monday with a time range of the last 31 hours, some data comes back, but it should be 0, as there is no school on Sunday. The cron job runs at 2 PM on Monday, but I don't know how to handle runs before 2 PM. Please help.
Hi, I have built a dashboard after testing the query in Search. The dashboard shows "search did not return any events" even though my query returns results when run in Search. Below is the source:

<form>
  <label>IP</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="time" token="time">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>sample</title>
      <event>
        <title>sample</title>
        <search>
          <query>|inputlookup mylookup|search tag="bruteforce"|dedup indicator|table indicator|union[search sourcetype="data" action=allowed (src_ip!=10.0.0.0/8 src_ip!=172.16.0.0/12 src_ip!=192.168.0.0/16) OR (dest_ip!=10.0.0.0/8 dest_ip!=172.16.0.0/12 dest_ip!=192.168.0.0/16)|eval indicator=mvappend(src_ip,dest_ip)|mvexpand indicator|dedup indicator|table indicator]|stats count by indicator|where count&gt;1</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>

Please give me your suggestions.
I want to sum the output stored in a summary index and display, in a dashboard, the sum of all counts for one week. Below is the code I am using, but the output comes back as only the previous day's output stored in the summary index:

index=*1 search_name="Daily File Transfer Counts"
| dedup BASE_FILE_ID
| table Date USER_NM BASE_FILE_ID FILE_NM File_Count_By_Day
| bin _time as week span=7d
| stats sum(File_Count_By_Day) as oneweek by XMIT_AUTH_USER_NM,XMIT_BASE_FILE_ID,XMIT_BASE_FILE_NM
| eval week=strftime(_time,"%Y - %U")

My code in Daily File Transfer Counts is as follows:

index=*1 sourcetype=s source="p" "File Catalog" "Completed"
| dedup FILE_ID
| eval Date=strftime(_time, "%b/%d/%Y ")
| stats count(FILE_ID) as "File_Count_By_Day" by Date,XMIT_AUTH_USER_NM,XMIT_BASE_FILE_ID,XMIT_BASE_FILE_NM

I am looking to count all the files transferred within a week for a particular file (the sum of File_Count_By_Day within a week).
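A likely issue in the query above: the weekly bucket created by bin is only preserved if the binned field appears in the stats BY clause. As written, stats discards it, and the final eval references an _time that no longer exists. A sketch of a corrected tail, using the field names from the post:

```
index=*1 search_name="Daily File Transfer Counts"
| bin _time span=7d
| stats sum(File_Count_By_Day) AS oneweek BY _time, XMIT_AUTH_USER_NM, XMIT_BASE_FILE_ID, XMIT_BASE_FILE_NM
| eval week=strftime(_time, "%Y - %U")
```

Note also that dedup BASE_FILE_ID across the whole search window keeps only one row per file ID; if the same file transfers on several days and each day's count should be summed, the dedup probably needs to go.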
Using the Microsoft Azure Add-on for Splunk v3.0.0, we have successfully ingested events from AD and Event Hub, and we are now attempting to get Security Center Alerts & Tasks, but we're getting the following stack trace:

2020-12-09 22:24:55,806 ERROR pid=31658 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_security_center_input.py", line 88, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_azure_security_center_input.py", line 83, in collect_events
    if (this_changedTime > max_asc_task_date):
TypeError: '>' not supported between instances of 'str' and 'NoneType'

Our app has "Reader" permissions in Azure, which fixed a 403 error before we got to this point, and it is very likely that this error is related to a permission setting somewhere in Azure (potentially similar to this solved question). The error only happens for Security Tasks, though we are not getting any events for Security Alerts either when one is enabled and the other is disabled.
First up, awesome tool and really useful. I am new to the tool and still learning its intricacies, but I have come across a problem that I don't quite understand yet. I ran the "Run: Trackers Report" with the short-term tracker. This populated the information I need for all my data sources and shows the last time, last ingest, and last time idx fields, which I then matched against a search run in SPL. However, I have noticed that while the index is updating (verified via SPL), TrackMe is not reflecting this in those columns. For example:
- Run the tracker: last time, last ingest, and last time idx all show 10/12/2020 11:59. The data_max_allowed is currently the default of 3600.
- Come back later, let the application refresh, and check the index via SPL: the index has a few new events, so for argument's sake its most recent event is now at 10/12/2020 12:35.
- However, in the TrackMe application (with no manual run of the tracker reports), last time, last ingest, and last time idx still show 10/12/2020 11:59, which does not reflect this.
Do I need to wait for the max_lag_allowed to expire before these fields refresh? I assumed this is automated, without having to run the trackers manually all the time.
Is it possible to invoke a PowerShell script from Splunk?
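Yes: on Windows, Splunk Enterprise and the universal forwarder include a PowerShell scripted-input handler, so a script can run on a schedule and have its output indexed. A minimal inputs.conf sketch (the stanza name, script path, sourcetype, and index are all assumptions to adapt):

```
[powershell://MyScript]
script = . "C:\Program Files\SplunkUniversalForwarder\etc\apps\my_app\bin\myscript.ps1"
schedule = */5 * * * *
sourcetype = my:powershell
index = main
disabled = 0
```

Alternatively, a script can be invoked on demand from a custom alert action, or through a classic [script://...] scripted input that calls powershell.exe.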
Hello all. I am wondering what the best choice is for putting the HTTP Event Collector behind an AWS load balancer. It would seem to me that an Application Load Balancer would be appropriate for HTTP traffic, as opposed to a Classic Load Balancer. Thoughts?
I have two events: items received and items acted on. I want an alert when the count by transactionID is not equal for the two searches. I have the search set up like so:

index=myIndex source=mySource
| search criteriaForItemsReceived
| stats count as itemsReceived by transactionID
| append
    [ search index=myIndex source=mySource
    | search criteriaForItemsProcessed
    | stats count as itemsProcessed by transactionID ]
| stats values(*) as * by transactionID

OK, it's a little more complicated, but this is the important part. I then need to compare itemsReceived to itemsProcessed to determine whether to alert. I have tried:

... stats values(eval(itemsReceived-itemsProcessed)) as Difference ...
| search Difference != 0

as well as doing the eval before the stats, but everything I try ends up with null values for the eval, even though the table is properly populated with values for itemsReceived and itemsProcessed (I have also tried convert num(itemsReceived) and tonumber(itemsReceived,10) in case Splunk was not recognizing these fields as numbers, but each time the fields are null). What am I doing wrong here?
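A likely explanation for the nulls: an eval inside stats runs per event, and in the appended pipeline each event carries either itemsReceived or itemsProcessed but never both, so the subtraction is always null. Computing the difference after the final stats, once both aggregates sit on the same row, is the usual fix. A sketch of the tail, using the field names from the post:

```
| stats values(*) as * by transactionID
| eval itemsReceived=coalesce(tonumber(itemsReceived), 0)
| eval itemsProcessed=coalesce(tonumber(itemsProcessed), 0)
| eval Difference=itemsReceived - itemsProcessed
| where Difference != 0
```

The coalesce calls also catch one-sided transactions, where only one of the two counts exists and would otherwise make the difference null.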
Hi, I want to exclude IPs when performing this search, but despite the IPs being present in the lookup, they still aren't excluded. I'm not sure what I'm doing wrong in my search; please advise. And yes, my lookup table is correct.

sourcetype=audit dest!=secure-uat NOT ( [|inputlookup IP_Allow | rename ip as src_ip | fields src_ip | return 10000 $src_ip] )
| timechart span=1h dc(user) by src_ip useother=0 usenull=0
| stats max(*) AS *
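One likely cause (a sketch; field names assumed from the post): return 10000 $src_ip emits bare values rather than field=value pairs, so the NOT clause matches raw-text terms instead of the src_ip field. Letting the subsearch return the field itself, which Splunk formats into (src_ip=x OR src_ip=y ...) automatically, is the usual pattern:

```
sourcetype=audit dest!=secure-uat NOT [ | inputlookup IP_Allow | rename ip AS src_ip | fields src_ip ]
| timechart span=1h dc(user) by src_ip useother=0 usenull=0
| stats max(*) AS *
```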