All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

[ANSWERED by to4kawa] props.conf should be:

[yoursourcetype]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = .*($)

I have a Catch-22 issue. I want three things to happen: I want to monitor a log file and combine all lines into a single event until the file has not been updated for 2 seconds or more; I want to disable all timestamp extraction and set every event's time to current (i.e. _indextime); and I want to be able to handle arbitrary data without any separators. Imagine my file is empty and then within a microsecond these three lines are added to it:

TEST1 Fri Apr 6 20:05:59 EDT 2020
TEST2 Fri Apr 3 20:04:30 EDT 2020
TEST3 Fri Apr 1 20:05:59 EDT 2020

I would like them all combined into a single event, with the timestamp set to current. If I just start monitoring the file without any props, it combines the lines into a single event just fine, BUT it tries to parse the timestamps and they end up all over the place. If I set DATETIME_CONFIG=CURRENT in props.conf, it sets the time to current but then splits the events into single lines. So I am in a Catch-22: I can have one or the other. Any ideas what inputs/props/transforms combo ignores all timestamps and combines the events into a single one no matter what? I honestly want something that checks whether the file has not been modified for 2 seconds and then combines everything new that was added into ONE event. Thanks!
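For the "wait 2 seconds" part, a minimal sketch of the input side, assuming the file is watched with a monitor stanza (the path is a placeholder): time_before_close controls how long Splunk waits after the last write before closing the file out at EOF, and multiline_event_extra_waittime makes it hold multiline events open a little longer.

# inputs.conf (sketch; pair with the props.conf answer above)
[monitor:///var/log/myapp/burst.log]
sourcetype = yoursourcetype
time_before_close = 2
multiline_event_extra_waittime = true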
Hello! I am trying to search for multiple malware domains in our logs. I can't figure out how to add multiple domains in my search. Example bad domains:

go9ogle.com
265online.com
bofa2.com

How could I search for all of the above domains at the same time?
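A minimal sketch, assuming the domains appear in the raw events (the index name and the url field are placeholders):

index=proxy ("go9ogle.com" OR "265online.com" OR "bofa2.com")

or, if a url/domain field is already extracted:

index=proxy url IN ("*go9ogle.com*", "*265online.com*", "*bofa2.com*")

For a longer list, putting the domains in a lookup file and feeding it in as a subsearch (| inputlookup bad_domains) scales better than hand-written ORs.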
hello! This is probably a simple answer that I'm not understanding. Running the query below adds a column at the very end called "success_rate". I don't want this, since I've transposed that field to the first row. It seems like the eval from line 4 is still trying to calculate it. How do I get rid of it? | fields - success_rate doesn't work.

index=wsi_tax_summary sourcetype=stash partnerId=* error_msg_service=* tax_year=2019 capability=* intuit_tid=*
| eval error_msg_service = case(match(error_msg_service, "OK"), "Success", 1==1, "Fail")
| timechart span=1w dc(intuit_tid) by error_msg_service
| fillnull
| eval total=Fail+Success, success_rate=round(((Success/total)*100),2)
| fieldformat success_rate=tostring('success_rate')+"%"
| fields _time, total, Success, Fail, success_rate
| eval _time=strftime(_time,"%m-%d-%Y")
| transpose column_name="Week Starting" header_field=_time
| regex "Week Starting"!="^_"
| fields - success_rate
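One possible fix, as a sketch: fieldformat only changes rendering, so replacing it with a plain eval before the transpose keeps the "%" value through the pivot, and row filtering (not field filtering) is what removes things after a transpose:

| eval success_rate=tostring(round((Success/total)*100,2))."%"
| fields _time, total, Success, Fail, success_rate
| eval _time=strftime(_time,"%m-%d-%Y")
| transpose column_name="Week Starting" header_field=_time

After the transpose, the old field names live in the "Week Starting" column, so an unwanted row would be dropped with | search "Week Starting"!="success_rate" rather than | fields - success_rate.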
Hello, I have DB Connect configured so that I can select a table and run a SQL query, something simple like: SELECT * FROM db1.table1. I see the fields listed under "Choose Column" for Timestamp, and I also increased the Query Timeout. But no data or table shows up below the SQL Editor; hovering over the progress bar, it says 20% and stops there. When I click Next, it says: "One or more fields are invalid, please fix them before go next". What am I doing wrong here? Thank you in advance.
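In case it helps narrow it down: for a rising-column input, DB Connect expects a checkpoint placeholder in the SQL, roughly like this sketch (the column name is a placeholder):

SELECT * FROM db1.table1 WHERE id > ? ORDER BY id ASC

For a plain batch input, the "Choose Column" timestamp selection generally has to point at a column the driver reports as a date/time type; a timestamp stored as varchar can trip the "one or more fields are invalid" validation.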
My expectations are that whenever I run my search:

| fields <> | lookup virustotal_url_cache vt_urls AS url OUTPUT vt_positives, vt_classification, vt_threat_id | virustotal url=url rescan=false | table <>

whatever isn't cached will hit the API; whatever has already been searched will return the cached results; and new results will be cached in the KV store. This hasn't been happening. Also, nothing has been cached to begin with. I ran a test on 8.8.8.8 and nothing returns. I am running Splunk Cloud.
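For reference, a sketch of how the cache-then-query flow could be made explicit in SPL, assuming virustotal_url_cache is a KV store lookup keyed on vt_urls and that the app's virustotal command emits the vt_* fields (both taken from the post, not verified against the app):

| lookup virustotal_url_cache vt_urls AS url OUTPUT vt_positives, vt_classification, vt_threat_id
| where isnull(vt_positives)
| virustotal url=url rescan=false
| eval vt_urls=url
| outputlookup append=true virustotal_url_cache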
Hi, I recently moved all the alerts to a single owner dedicated to scheduled searches. My Monitoring Console shows that 59% of my scheduled searches were skipped in the last 24 hours. The reason is: "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". How can I resolve this?
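Two common levers, sketched with illustrative values (not recommendations): raising the scheduler's share of the concurrent-search limit in limits.conf on the search head,

[scheduler]
max_searches_perc = 75

and staggering the alerts' cron schedules so they don't all start on the same minute. Setting schedule_window in savedsearches.conf on searches that don't need to run at an exact time also lets the scheduler spread them out.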
Hi folks, the incidents triggered in Splunk Enterprise Security are not getting replicated. I checked splunkd.log and see the error below:

04-03-2020 23:46:17.490 +0530 INFO SHCSlave - event=SHPSlave::handleReplicationError aid=scheduler_nobody_U3BsdW5rX1NBX0NJTQ_RMD59cba5de3e5a67614_at_1585928100_2995_62EADBA1-5790-4632-BD11-A0EF9E4C4FBC src=965BB163-3807-4A31-9837-DB64A209B7CA tgt=62EADBA1-5790-4632-BD11-A0EF9E4C4FBC failing=965BB163-3807-4A31-9837-DB64A209B7CA queued replication error job

I have also tried resyncing the SH cluster members and did a rolling restart, but no luck. I also see that the shcluster status on the members and the captain is fluctuating: sometimes pending, then coming back up automatically. Please suggest.
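For reference, these are the usual commands for inspecting status and forcing a member back in sync, run on the affected member (a sketch, not a guaranteed fix for the replication error above; credentials are placeholders):

splunk show shcluster-status -auth admin:changeme
splunk resync shcluster-replicated-config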
Hi @gaurav_maniar @larmesto, I have enabled the modular input for getting data from Elasticsearch into Splunk; it is working and data is coming in. But I am not sure how the interval is supposed to work. It pulls in the same data again and again every time the input runs on its interval. Is it supposed to work that way? Thanks, Nawaz.
Hello, I'm trying to prepare a silent install of the Splunk Universal Forwarder, but I'm having difficulty finding the option that unchecks the "Use this UniversalForwarder with on-premises Splunk Enterprise. Uncheck if you want the UniversalForwarder to contact a Splunk Cloud instance" checkbox on the main page. What flag will uncheck this? Thank you
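A sketch of the usual flag-driven approach: in an MSI command-line install the checkbox never comes into play, and the on-premises behavior follows from pointing the forwarder at your own deployment server and/or indexer (hostnames and the MSI filename are placeholders):

msiexec.exe /i splunkforwarder-x64.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" RECEIVING_INDEXER="idx.example.com:9997" /quiet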
Hi, Running into this error trying to set up the Streaming API:

04-03-2020 11:37:21.473 +0000 INFO TcpOutputProc - Connected to idx=3.225.177.214:9997, pset=0, reuse=0.
04-03-2020 11:37:34.438 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': Traceback (most recent call last):
04-03-2020 11:37:34.438 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/bin/runScript.py", line 78, in <module>
04-03-2020 11:37:34.438 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': execfile(REAL_SCRIPT_NAME)
04-03-2020 11:37:34.438 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike_rh_falcon_host_accounts.py", line 136, in <module>
04-03-2020 11:37:34.438 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': admin.init(base.ResourceHandler(Servers), admin.CONTEXT_APP_AND_USER)
04-03-2020 11:37:34.438 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 130, in init
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': hand.execute(info)
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 593, in execute
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': if self.requestedAction == ACTION_CREATE: self.handleCreate(confInfo)
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/base.py", line 253, in handleCreate
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': args = self.encode(self.callerArgs.data)
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/base.py", line 299, in encode
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': args = self.validate(args)
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/base.py", line 659, in validate
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': logLevel=logging.INFO)
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/error_ctl.py", line 150, in ctl
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': raise BaseException(err)
04-03-2020 11:37:34.439 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': BaseException: REST ERROR[1100]: Unsupported value in request arguments - Authorization Failed! Please verify API UUID and API Key of Streaming API - field=api_key
04-03-2020 11:37:34.450 +0000 ERROR AdminManagerExternal - External handler failed with code '1' and output: 'REST ERROR[1100]: Unsupported value in request arguments - Authorization Failed! Please verify API UUID and API Key of Streaming API - field=api_key'. See splunkd.log for stderr output.
04-03-2020 11:37:40.640 +0000 WARN TcpOutputProc - Cooked connection to ip=52.22.200.180:9997 timed out
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': Traceback (most recent call last):
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/bin/runScript.py", line 78, in <module>
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': execfile(REAL_SCRIPT_NAME)
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike_rh_falcon_host_accounts.py", line 136, in <module>
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': admin.init(base.ResourceHandler(Servers), admin.CONTEXT_APP_AND_USER)
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 130, in init
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': hand.execute(info)
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 593, in execute
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': if self.requestedAction == ACTION_CREATE: self.handleCreate(confInfo)
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/base.py", line 253, in handleCreate
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': args = self.encode(self.callerArgs.data)
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/base.py", line 299, in encode
04-03-2020 11:37:51.207 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': args = self.validate(args)
04-03-2020 11:37:51.208 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/base.py", line 659, in validate
04-03-2020 11:37:51.208 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': logLevel=logging.INFO)
04-03-2020 11:37:51.208 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': File "/opt/splunk/etc/apps/TA-crowdstrike/bin/ta_crowdstrike/splunktaucclib/rest_handler/error_ctl.py", line 150, in ctl
04-03-2020 11:37:51.208 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': raise BaseException(err)
04-03-2020 11:37:51.208 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/bin/runScript.py execute': BaseException: REST ERROR[1100]: Unsupported value in request arguments - Authorization Failed! Please verify Username and Password of Query API - field=api_key
04-03-2020 11:37:51.219 +0000 ERROR AdminManagerExternal - External handler failed with code '1' and output: 'REST ERROR[1100]: Unsupported value in request arguments - Authorization Failed! Please verify Username and Password of Query API - field=api_key'. See splunkd.log for stderr output.

Any ideas would be welcome. Cheers W
As always, I know you will be able to answer my question. So, using this query:

index=_nix_xxxx sourcetype=df (host=abdhw003 OR host=n OR host=n OR host=n OR host=n) MountedOn="/doc"
| eval TotalGBytes=TotalMBytes/1024
| eval UsedGBytes=UsedMbytes/1024
| eval PercentUsed=100*(UsedGBytes/TotalGBytes)
| stats max(TotalGBytes) as "MaxSize(GB)" max(UsedGBytes) as "UsedSize(GB)" max(PercentUsed) as PercentUsed by host, MountedOn
| search PercentUsed>5
| sort - PercentUsed

I am able to see the space used by each server. Is there a way in the dashboard where, once any server hits 80% or 90% used, the color of that server changes to red and an email is triggered to the support team saying a certain server has reached 90% capacity? Is that a query, or something to be parameterized in the dashboard itself? Trying to understand Splunk; I appreciate all the help. Thanks, Mike
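For the color change, Simple XML table formatting can handle it without any extra search; a sketch with illustrative thresholds, placed inside the dashboard panel's <table> element:

<format type="color" field="PercentUsed">
  <colorPalette type="list">[#65A637,#F7BC38,#D93F3C]</colorPalette>
  <scale type="threshold">80,90</scale>
</format>

The email is separate: save the same search as an alert with a trigger condition such as search PercentUsed > 90 and an email alert action.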
Hello, I'm trying to remove the data I have in a KV store collection. I'm using this command:

splunk clean kvstore -app system -collection alerts_prod

I'm getting a message that there is nothing to remove, even though the collection stats show it is not empty:

$clusterTime.clusterTime.$timestamp.i $clusterTime.clusterTime.$timestamp.t $clusterTime.signature.hash.$binary $clusterTime.signature.hash.$type $clusterTime.signature.keyId App Collection author avgObjSize capped count data dbsize eai:acl.app eai:acl.can_list eai:acl.can_write eai:acl.modifiable eai:acl.owner eai:acl.perms.read eai:acl.perms.write eai:acl.removable eai:acl.sharing id indexSizes.UserAndKeyUniqueIndex indexSizes._id indexsize lastExtentSize nindexes ns numExtents ok operationTime.$timestamp.i operationTime.$timestamp.t paddingFactor paddingFactorNote published size splunk_server storageSize title totalIndexSize updated userFlags
1 1585936837 AAAAAAAAAAAAAAAAAAAAAAAAAAA= 00 0 system alerts_prod system 266059 false 180
{"ns":"system.alerts_prod","size":47890624,"count":180,"avgObjSize":266059,"numExtents":8,"storageSize":61513728,"lastExtentSize":33554432,"paddingFactor":1,"paddingFactorNote":"paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.","userFlags":1,"capped":false,"nindexes":2,"indexDetails":{},"totalIndexSize":32704,"indexSizes":{"id":8176,"_UserAndKeyUniqueIndex":24528},"ok":1,"operationTime":{"$timestamp":{"t":1585936837,"i":1}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":1585936837,"i":1}},"signature":{"hash":{"$binary":"AAAAAAAAAAAAAAAAAAAAAAAAAAA=","$type":"00"},"keyId":0}}}

What am I missing? Thanks
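If the CLI keeps reporting nothing to remove, the KV store REST endpoint is an alternative way to delete all records in the collection (a sketch; host and credentials are placeholders):

curl -k -u admin:changeme -X DELETE https://localhost:8089/servicesNS/nobody/system/storage/collections/data/alerts_prod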
Hi all, need your assistance: I am trying to set up an alert, but when I save the alert I get a server error at the top. I have set up alerts in the past and am not able to figure out the issue; I tried restarting the search head too, but no luck. To add more context, the search is for setting up an alarm on disk space, and the search I am setting the alert on currently returns no results because the size threshold has not been met yet. Thanks & Regards, Deepak
Hi, I am dealing with a situation here: trying to join 2 queries to find the peak hour volume in the last 90 days on a particular page. The data needs to come from two queries because of the use of referer in the sub-search. limits.conf can't be modified because there are so many records, and due to performance. So is there an alternate way, or can someone help me with an alternate query? That would be greatly appreciated.

index=test sourcetype="access_combined_wcookie" req_content="/checkout/yourdetails" status=200
| join uniqueId max=0 [ search index=test sourcetype="access_combined_wcookie" req_content="/reviewbasket" referer="https://www.site.com/content/site/homePage.html*"]
| timechart span=1h count
| sort - count

@manjunathmeti @somesoni2 @to4kawa @woodcock - Will you guys be able to help as you helped me previously? Thanks very much in advance
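A join-free sketch, assuming uniqueId is what ties the two page hits together (all field names are taken from the post): pull both page types in one search, mark which pages each uniqueId touched, then keep only checkout events whose uniqueId also hit the basket page:

index=test sourcetype="access_combined_wcookie" ((req_content="/checkout/yourdetails" status=200) OR (req_content="/reviewbasket" referer="https://www.site.com/content/site/homePage.html*"))
| eval page=if(req_content=="/checkout/yourdetails", "checkout", "basket")
| eventstats values(page) as pages by uniqueId
| where page="checkout" AND mvcount(pages)=2
| timechart span=1h count
| sort - count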
I am following a Snort + Splunk installation tutorial - find it here. Everything regarding Splunk went as expected (installed and I was able to log in), except the part where I have to add a plugin. The problem is that after asking for splunk.com credentials to install the particular plugin mentioned in the tutorial, it says I have entered the wrong credentials, but they work just fine because I can log in at splunk.com. I am using a VM on Google Cloud. Could it be due to closed ports? Thanks
Hello everyone, I am having issues using Splunk to read and extract fields from this JSON file. I would appreciate any help.

JSON data:

{ "uid" : "a82ee257", "name" : "Throughput Utilization", "axisXType" : "DateTime", "elementReports" : [ { "element" : { "id" : "001", "name" : "NS-001", "type" : "NetworkSegment" }, "series" : [ { "uid" : "3242d4e4", "instance" : "0", "name" : "Utilization", "data" : [ { "x" : 1551051000000, "y" : 0.0 }, { "x" : 1551051300000, "y" : 3.1 }, { "x" : 1551136800000, "y" : 7.4 }, { "x" : 1551137100000, "y" : 1.6 } ], "e" : 1 } ] }, { "element" : { "id" : "002", "name" : "NS-002", "type" : "NetworkSegment" }, "series" : [ { "uid" : "4654d4e4", "instance" : "0", "name" : "Utilization", "data" : [ { "x" : 1551051000000, "y" : 0.3 }, { "x" : 1551051300000, "y" : 0.0 }, { "x" : 1551051600000, "y" : 0.0 }, { "x" : 1551137100000, "y" : 2.12 } ], "e" : 1 } ] }, { "element" : { "id" : "003", "name" : "NS-003", "type" : "NetworkSegment" }, "series" : [ { "uid" : "2481d4e6", "instance" : "0", "name" : "Utilization", "data" : [ { "x" : 1551051000000, "y" : 0.0 }, { "x" : 1551051300000, "y" : 0.0 }, { "x" : 1551051900000, "y" : 0.0 }, { "x" : 1551136800000, "y" : 0.0 } ], "e" : 1 } ] }, { "element" : { "id" : "004", "name" : "NS-004", "type" : "NetworkSegment" }, "series" : [ ] } ] }

Here is my setting:

[json_sample]
TRUNCATE = 0
SHOULD_LINEMERGE = false
LINE_BREAKER = (,*){\s+"element"

Here is what I am expecting:

element.id,element.name,element.type,element.series.uid,element.series.instance,element.series.name,data.x,data.y,,,,
001,NS-001,NetworkSegment,3242d4e4,0,Utilization,1551051000000,0.0,,,,
001,NS-001,NetworkSegment,3242d4e4,0,Utilization,1551051300000,3.1,,,,
001,NS-001,NetworkSegment,3242d4e4,0,Utilization,1551136800000,7.4,,,,
001,NS-001,NetworkSegment,3242d4e4,0,Utilization,1551137100000,1.6,,,,
002,NS-002,NetworkSegment,4654d4e4,0,Utilization,1551051000000,0.3,,,,
002,NS-002,NetworkSegment,4654d4e4,0,Utilization,1551051300000,0.0,,,,
002,NS-002,NetworkSegment,4654d4e4,0,Utilization,1551136800000,0.0,,,,
002,NS-002,NetworkSegment,4654d4e4,0,Utilization,1551137100000,2.12,,,,
003,NS-003,NetworkSegment,2481d4e6,0,Utilization,1551051000000,,,,,
003,NS-003,NetworkSegment,2481d4e6,0,Utilization,1551051300000,,,,,
003,NS-003,NetworkSegment,2481d4e6,0,Utilization,1551136800000,,,,,
003,NS-003,NetworkSegment,2481d4e6,0,Utilization,1551137100000,,,,,
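A search-time sketch that flattens this shape into rows like the table above, assuming each JSON document is indexed as a single event and each element carries at most one series (standard spath plus mvexpand; field names follow from the JSON):

| spath path=elementReports{} output=report
| mvexpand report
| spath input=report
| rename series{}.* as series_*
| spath input=report path=series{}.data{} output=point
| mvexpand point
| spath input=point path=x output=x
| spath input=point path=y output=y
| table element.id element.name element.type series_uid series_instance series_name x y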
We are on Splunk Enterprise 8.0.2 (just updated from v6 - the problem occurred there too). The dashboard and a manual "Export PDF" print correct results, but "Schedule PDF Delivery" produces wrong, empty data. Yet converting the exact same search into a report and referencing that report from the dashboard instead prints the correct data when scheduled!

This fails when scheduled:

<search><query>index=$idx$ ExecEdScAbverkaeufe::true | stats count</query><earliest>@d-3d</earliest><latest>@d-2d</latest></search>

(ExecEdScAbverkaeufe is a custom field.) But this works when scheduled:

<search ref="depot_status_DESADVEd"></search>
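One thing worth ruling out: scheduled PDF rendering runs with no user interaction, so a dashboard input token like $idx$ can be unset at render time, while the report version has the index value baked into the saved search. A cheap test is giving the input a default (a sketch; the token name comes from the post, the value is a placeholder):

<input type="text" token="idx">
  <default>your_index</default>
</input>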
We are attempting to write a report querying multiple indexes, which creates a table using data from each. Our challenge: when we add indextime or _time, the report shows every indextime for each system in the selected date range. Instead, we want the report to show only the latest indextime/_time for each system.

(index=* sourcetype=ActiveDirectory objectCategory="CN=Computer,CN=Schema,CN=Configuration,DC=foo,DC=com") OR (index=windows DisplayName="BitLocker Drive Encryption Service" source="kiwi syslog server")
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval cn=coalesce(cn,host)
| stats values(*) AS * BY cn
| search cn=* host=* NOT [inputlookup All_Virtual_Machines.csv | rename Name as cn]
| where StartMode!="" AND operatingSystem!="" AND Started!="true"
| rename cn as System, operatingSystem as OS
| dedup System
| table System StartMode State Started OS indextime
| sort System
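A sketch of carrying only the newest index time through the stats, keeping everything else in the search as-is (latest_indextime is an illustrative name): aggregate the numeric _indextime with max() and format it afterwards, instead of collecting every formatted string with values(*).

| eval indextime=_indextime
| stats max(indextime) as latest_indextime values(*) AS * by cn
| eval latest_indextime=strftime(latest_indextime, "%Y-%m-%d %H:%M:%S")

latest_indextime would then replace indextime in the final table command.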
Splunk has all of those threat intel lists for file, process, registry, ip, url, etc., and each list has a description field where I put the threat campaign info related to the IOC. I want to extract that description into a new Threat_Activity.description field (in the Threat Intelligence data model) when it finds a match in the event logs. I have tried several tactics on my own, altering the various threat gen searches, but with no success. I know I can do searches with joins and such as workarounds. I also know that if I put that info in the upload name, it will show up in the threat collection or threat key field, but we often get a huge threat list with several different campaigns, and I would like to upload them all at the same time. This seems like a simple ask, since this field is in every built-in threat lookup. How can I get it to extract into a new field at match time?
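As a workaround sketch until the threat-gen route works: enrich the match after the fact by looking the matched value back up against the source intel collection. The collection and field names here are assumptions based on the stock ES intel collections, so adjust to your environment:

index=threat_activity
| lookup ip_intel ip as threat_match_value OUTPUTNEW description as threat_description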
Hi at all, I'm having problems extracting fields from a JSON log using spath; I cannot use regexes because I have to use these fields in the Zimperium App data model. I have already extracted the JSON, but I'm still running into problems. This is a sample:

<14>1 04 02 2020 17:02:22 UTC zconsole-xxxxxxxxxx-xxx44 {"system_token": "company-uat", "severity": 1, "event_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "forensics": {"zdid": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "event_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "os": 1, "attack_time": {"$date": 1585846942000}, "general": [{"name": "Threat Type", "val": "DORMANT"}, {"name": "Action Triggered", "val": ""}], "threat_uuid": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "type": 100}, "mitigated": false, "location": null, "eventtimestamp": "04 02 2020 17:02:22 UTC", "user_info": {"employee_name": "User03 Test", "user_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "user_role": "End User", "user_email": "test.user03@company.com", "user_group": "__MTD_UAT"}, "device_info": {"tag1": "", "device_time": "03 30 2020 17:01:31 UTC", "app_version": "10.5.1.0.52R", "zdid": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "tag2": "", "os": "Android", "app": "MobileIron", "jailbroken": false, "operator": null, "os_version": "9", "mdm_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "imei": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "model": "SM-A530F", "device_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "type": "jackpotltexx", "zapp_instance_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx"}, "threat": {"story": "Inactive Device", "name": "Inactive Device", "general": {"action_triggered": "", "threat_type": "DORMANT"}}}

If I use the spath command, I get an additional field called "14" containing the whole event. The problems started at ingestion, because this log isn't recognized as JSON by the guided ingestion (the syslog header precedes the JSON). Can anyone give me an idea how to do this? Thank you in advance. Ciao. Giuseppe
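A search-time sketch that strips the syslog header before parsing (the extracted field name json is arbitrary): grab everything from the first { to the last } and feed only that to spath.

| rex field=_raw "(?<json>\{.*\})"
| spath input=json

Alternatively, at index time, a SEDCMD in props.conf can cut the header off so the indexed event is pure JSON (the sourcetype name is a placeholder):

[zimperium:syslog]
SEDCMD-strip_syslog_header = s/^[^{]+//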