All Topics


Hi all, I'm having problems extracting fields from a JSON log using spath. I cannot use regexes because I have to use these fields in the Zimperium App data model. I already extracted the JSON, but I don't know why I'm still having problems. This is a sample:

<14>1 04 02 2020 17:02:22 UTC zconsole-xxxxxxxxxx-xxx44 {"system_token": "company-uat", "severity": 1, "event_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "forensics": {"zdid": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "event_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "os": 1, "attack_time": {"$date": 1585846942000}, "general": [{"name": "Threat Type", "val": "DORMANT"}, {"name": "Action Triggered", "val": ""}], "threat_uuid": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "type": 100}, "mitigated": false, "location": null, "eventtimestamp": "04 02 2020 17:02:22 UTC", "user_info": {"employee_name": "User03 Test", "user_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "user_role": "End User", "user_email": "test.user03@company.com", "user_group": "__MTD_UAT"}, "device_info": {"tag1": "", "device_time": "03 30 2020 17:01:31 UTC", "app_version": "10.5.1.0.52R", "zdid": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "tag2": "", "os": "Android", "app": "MobileIron", "jailbroken": false, "operator": null, "os_version": "9", "mdm_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "imei": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "model": "SM-A530F", "device_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx", "type": "jackpotltexx", "zapp_instance_id": "xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx"}, "threat": {"story": "Inactive Device", "name": "Inactive Device", "general": {"action_triggered": "", "threat_type": "DORMANT"}}}

If I use the spath command I get an additional field called "14" containing the whole event. The problems started at ingestion, because this log isn't recognized as JSON by the guided ingestion. Can anyone give me an idea how to do this? Thank you in advance. Ciao. Giuseppe
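One common workaround for this kind of syslog-header-plus-JSON event is to carve the JSON payload out into its own field and feed only that to spath. A sketch (the index and sourcetype names here are placeholders; it assumes the payload always starts at the first `{` and runs to the end of the event):

```spl
index=zimperium sourcetype=zconsole
| rex field=_raw "(?<json_payload>\{.+\})$"
| spath input=json_payload
| table severity event_id user_info.user_email device_info.os threat.name
```

Because spath reads from json_payload instead of _raw, the `<14>` syslog priority prefix no longer confuses the extraction.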
My index is refreshed every 15 minutes and new data is populated every 15 minutes. I need to count the events for the last 15 minutes of each day, over a period of 30 days. Currently I am doing dc(field) for each day, but that removes all duplicate events and the count is not what I want. I want the count for a 15-minute window on each of the last 30 days, without using dc.
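One way to do this with a plain count (a sketch; `my_index` is a placeholder, and it assumes "last 15 minutes of each day" means the 23:45-00:00 window of each calendar day):

```spl
index=my_index earliest=-30d@d
| eval day=strftime(_time, "%Y-%m-%d")
| eval secs_into_day=_time - relative_time(_time, "@d")
| where secs_into_day >= (86400 - 900)
| stats count by day
```

`relative_time(_time, "@d")` snaps each event to its own midnight, so `secs_into_day` is the event's offset into its day; keeping offsets of 85500 seconds or more selects only the final 15-minute window, and `stats count` keeps duplicates, unlike dc.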
Splunk UFs of different versions (6.0.0, 6.3, and 6.5.2) are connecting to a deployment server running 7.2.6. All of a sudden, some of the clients are dropping off the deployment server.
Hi everyone, I found this search for GlobalProtect in the Palo Alto Networks App. The information it shows is really useful; the only problem I have is: how do I show receive_time (or the time of the log) in the results?

| tstats summariesonly=t latest(log.event_id) AS latest_event, values(log.agent_message) AS log.agent_message, values(log.src_ip) AS log.src_ip count FROM datamodel="pan_firewall" WHERE nodename="log.system.globalprotect" groupby _time log.event_id log.user

When I remove the _time field, that column disappears, and if I try something like values(log.receive_time) it doesn't show any information. I just want to show the time without a groupby on _time, because that groups all the logs into 30-minute buckets (e.g. 10:00 am - 10:30 am).
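One approach (a sketch against the same data model) is to ask tstats for the latest event time per group instead of grouping by _time, then format it for display:

```spl
| tstats summariesonly=t latest(_time) AS last_seen latest(log.event_id) AS latest_event
    values(log.agent_message) AS agent_message values(log.src_ip) AS src_ip count
  FROM datamodel="pan_firewall"
  WHERE nodename="log.system.globalprotect"
  groupby log.event_id log.user
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```

Since _time is no longer in the groupby, results are not bucketed into 30-minute spans, and `last_seen` still carries the timestamp of the most recent matching event per event_id/user pair.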
I have a Splunk instance with an LDAP configuration. We noticed that a huge number of authentications are being performed against the LDAP service using the bind DN user. Does Splunk authentication refresh the LDAP strategies automatically every so often? What could be the reason behind the large number of authentications?
Here is the message in Splunk, and I am trying to extract customer and channel:

{"line":"2020-04-03T12:24:54.589Z LCS {\"customer\":5,\"channel\":\"sqs\",\"notificationId\":213546}

When I run something like this:

index=docker "Exception" | rex "CustomerID: (?<customer>\S+)," | rex "channelName\\\\\":\\\\\"(?<channel>\w+)" | stats count(notificationId) by CustomerID

I am able to see the CustomerID extracted, but when I do:

index=docker "Exception" | rex "CustomerID: (?<customer>\S+)," | rex "channelName\\\\\":\\\\\"(?<channel>\w+)" | stats count(notificationId) by CustomerID, channelName

it does not display any results, which tells me I am not extracting channelName correctly. How can I fix this?
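Two things stand out: the sample event contains keys named customer and channel (not CustomerID or channelName), and stats can only group by the names the rex commands actually capture. A sketch that matches the escaped JSON in the sample and groups by the captured names (the extracted notificationId is also needed for the count to be non-empty):

```spl
index=docker
| rex "\\\\\"customer\\\\\":(?<customer>\d+)"
| rex "\\\\\"channel\\\\\":\\\\\"(?<channel>\w+)"
| rex "\\\\\"notificationId\\\\\":(?<notificationId>\d+)"
| stats count(notificationId) AS notifications by customer, channel
```

The `\\\\\"` sequences follow the same escaping convention as the original search: SPL string unescaping turns them into the regex `\\"`, which matches the literal backslash-quote pairs inside the "line" value.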
Some ExtraHop detection events are not being parsed correctly because the default LEEF parser specified in transforms.conf, which ships with the ExtraHop Add-on for Splunk, looks for very specific fields in the events. Some ExtraHop detection events do not contain all of the key-value pairs the LEEF parser expects, so not all events parse correctly.

Default LEEF parser that comes with the add-on:

REGEX = \|appliance_id=(?P<appliance_id>[a-f0-9]+)¦categories=(?P<categories>.*?)¦det_id=(?P<id>\d+)¦det_url=(?P<detection_url>.*?)¦update_time=(?P<update_time>\w{3} \d{1,2} \d{4} \d{2}:\d{2}:\d{2} \+0000)¦end_time=(?P<end_time>\w{3} \d{1,2} \d{4} \d{2}:\d{2}:\d{2} \+0000)?¦risk_score=(?P<risk_score>\d+)¦start_time=(?P<start_time>\w{3} \d{1,2} \d{4} \d{2}:\d{2}:\d{2} \+0000)¦title=(?P<title>.*?)¦offender_ip=(?P<offender_ip>.*?)¦victim_ip=(?P<victim_ip>.*?)¦offender_id=(?P<offender_id>.*?)¦victim_id=(?P<victim_id>.*?)¦desc=(?P<description>.*?)$

Example event where the default LEEF parser will not work due to missing key-value pairs:

Mar 31 12:13:32 10.1.9.11 LEEF:2.0|ExtraHop|Reveal(x)|7.8|extrahop-detection|xa6|appliance_id=<applianceID>¦categories=sec,sec.caution¦det_id=55834¦det_url=https://<IP address>/extrahop/#/detections/detail/55834¦update_time=Mar 31 2020 12:13:30 +0000¦risk_score=60¦start_time=Mar 31 2020 12:09:59 +0000¦title=Daily Summary: Inbound Suspicious Connections¦victim_ip=<victim_IP>¦victim_id=<victim_id>¦desc=Over the past day, servers received connections from devices with suspicious IP addresses. These IP addresses are considered suspicious based on threat intelligence found in your Reveal(x) system. Investigate to determine if the IP addresses are from malicious endpoints.
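One way around the single rigid regex is a generic delimiter-based key-value transform that extracts whatever ¦-separated pairs happen to be present, so missing keys like offender_ip no longer break the whole match. A sketch for a local override (the stanza name is hypothetical, and the sourcetype in props.conf must be replaced with the one the add-on actually assigns):

```ini
# local/transforms.conf -- hypothetical stanza, not the add-on's own
[extrahop_leef_kv]
REGEX = (?:^|¦)(\w+)=([^¦]*)
FORMAT = $1::$2
MV_ADD = true

# local/props.conf -- substitute the add-on's real sourcetype name
[extrahop]
REPORT-leef_kv = extrahop_leef_kv
```

Field names would then follow the raw keys (e.g. det_id rather than id), so any downstream searches relying on the renamed captures of the default parser would need aliases.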
I'm using rangemap (mapped with field colors respectively) in choropleth maps to sort the legend accordingly. However, is there a way to remove the numbering in front of each range and still keep the order I want? Once I remove the numbers, the ordering breaks. I tried pre-pending spaces in front, but that doesn't work properly.

<panel>
  <title>Monthly Volume Trend (%)</title>
  <map>
    <search base="First_Base_Search">
      <query>search $OriginCtryCode$
| search $OriginRegion$
| search $DstCtryCode$
| search $DestRegion$
| stats sum(eval(if(_time>=relative_time(now(),"-31d"),SHP_VOL,0))) as Latest30Days, sum(eval(if(_time>=relative_time(now(),"-61d") AND _time&lt;relative_time(now(),"-31d"),SHP_VOL,0))) as Prev30Days by DEST_COUNTRY_CODE
| eval PercentageDiff=round((Latest30Days-Prev30Days)/Prev30Days*100)
| table DEST_COUNTRY_CODE,Latest30Days,Prev30Days,PercentageDiff
| rename DEST_COUNTRY_CODE as iso2
| lookup geo_attr_countries iso2 OUTPUT country
| where !isnull(country)
| rangemap field=PercentageDiff "1. Over 50%"=50.01-1000 "2. 20% to 50%"=20.01-50 "3. 10% to 20%"=10.01-20 "4. 0% - 10%"=0.01-10 "5. 0% to -10%"=-9.99-0 "6. -10% to -20%"=-19.99--10 "7. -20% to -50%"=-49.99--20 "8. Over -50%"=-1000--50
| fields+ country, range
| sort range
| geom geo_countries featureIdField="country"</query>
    </search>
    <option name="drilldown">none</option>
    <!--option name="mapping.choroplethLayer.colorBins">8</option-->
    <option name="mapping.fieldColors">{"1. Over 50%":0x39A800,"2. 20% to 50%":0x64B73F,"3. 10% to 20%":0xA6D48C,"4. 0% - 10%":0xC4E3B2,"5. 0% to -10%":"0xFF9E81","6. -10% to -20%":0xFF7B5A,"7. -20% to -50%":0xFF5232,"8. Over -50%":0xFF0000,"None":0xFFFFFF}</option>
    <!--option name="mapping.choroplethLayer.colorMode">categorical</option>
    <option name="mapping.choroplethLayer.maximumColor">0x53a051</option>
    <option name="mapping.choroplethLayer.minimumColor">0xdc4e41</option-->
    <option name="mapping.choroplethLayer.shapeOpacity">0.7</option>
    <option name="mapping.map.zoom">2</option>
    <option name="mapping.showTiles">1</option>
    <option name="mapping.tileLayer.tileOpacity">0.7</option>
    <option name="mapping.type">choropleth</option>
  </map>
</panel>

To further illustrate, I would like the legend to be sorted in this exact order when the numbering is removed:-
I have a field serv_time = 44432 in milliseconds, and the default field _time. I want to subtract serv_time from _time (_time minus serv_time) and get the result in a human-readable format.
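Since _time is in epoch seconds and serv_time is in milliseconds, the milliseconds have to be converted before subtracting. A sketch (the output field names are arbitrary):

```spl
| eval start_time = _time - (serv_time / 1000)
| eval start_time_readable = strftime(start_time, "%Y-%m-%d %H:%M:%S.%3N")
```

For serv_time = 44432, this moves _time back by about 44.4 seconds and renders the result down to milliseconds via `%3N`.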
Hi all, my team recently got metric data into Splunk, and I created several dashboards with various drop-down tokens for metric names as well as host. My next step was to create a logical base search with post-processing searches to reduce the number of concurrent searches running within the panels.

I've been having a heck of a time getting a proper base search to work, when it struck me: these metric searches on the panels almost always complete in one second, and there aren't too many metric points being generated per day. Are base searches even worth it for this type of data? I know that base searches are a best practice for dashboards, but these panels still load almost instantly even when concurrent dashboards are being run.

Trying to get some insight before I go down rabbit holes again to get a base search and post-processing searches working. I appreciate any feedback.
I tried:

index=_nix_xxxx sourcetype=df host=abdhw003 MountedOn="/doc" | eval source="/doc*"

and that seems to show the data for the /doc folder. Now I have multiple servers, and I need stats for each server separately, for whichever server has more than 5% used. But when I run the command below, all the servers are added together and I get a one-liner with all server info merged. I think I messed up the stats part. Please help.

index=_nix_xxxx sourcetype=df host=abdhw003 OR host=n OR host=n OR host=n or host=n MountedOn="/doc"| eval TotalGBytes= TotalMBytes/1024 | eval UsedGBytes=UsedMbytes/1024 |eval used_pct=100(UsedGBytes/TotalGBytes) | stats max(TotalGBytes) as "MaxSize(GB) max(UsedGBytes) as "UsedSize(GB) as "percentUsed" by MountedOn | search PercentUsed>05| Sort PercentUsed

The stats I am getting are totalled (all 5 servers added together, showing me a single max value), I think because the stats query takes the max. How do I show stats for each server individually? Any ideas? Thanks for the help; I appreciate it.
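A corrected sketch (host2/host3 are placeholders; field capitalization such as UsedMBytes should be checked against the actual df sourcetype). The likely culprits in the original are the missing `*` in the percentage eval, the unbalanced quotes in the stats clause, and the absence of `host` in the by clause:

```spl
index=_nix_xxxx sourcetype=df (host=abdhw003 OR host=host2 OR host=host3) MountedOn="/doc"
| eval TotalGBytes = TotalMBytes/1024, UsedGBytes = UsedMBytes/1024
| eval PercentUsed = round(100 * UsedGBytes / TotalGBytes, 2)
| stats max(TotalGBytes) AS "MaxSize(GB)" max(UsedGBytes) AS "UsedSize(GB)" max(PercentUsed) AS PercentUsed by host, MountedOn
| where PercentUsed > 5
| sort - PercentUsed
```

Adding `host` to the by clause is what splits the results into one row per server instead of a single merged line.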
Hi, I am trying to filter input and output from lines like this:

2020-03-31 09:57:11,714 9.5.1455: ERROR syslog156: operation failed for (28, 325). Status codes: 'blablabla'

with:

host="main" source="main.log" ERROR syslog* | rex "(?=[\(](?<input>\d+)[,])" | rex "(?=[, ](?<output>\d+)[\).])"

and later find the id with the following (in the log line, input is separated from output by 'to'):

2020-03-31 09:57:11,020 9.5.1455: INFO syslog890: Should connect/disconnect 28 to 325 for 1.1.8.4.58.1 with operation absolute

host="main" source="main.log" INFO syslog* input "to" output | rex "(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\.)(?<id>(\d{1,3}))" | table _time id

The complete query looks like this:

host="main" source="main.log" INFO syslog* input "to" output | rex "(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\.)(?<id>(\d{1,3}))" | table _time id [search host="main" source="main.log" ERROR syslog* | rex "(?=[\(](?<input>\d+)[,])" | rex "(?=[, ](?<output>\d+)[\).])"]

Sample events:

2020-03-31 09:57:11,020 9.5.1455: INFO syslog890: Should connect/disconnect 28 to 325 for 1.1.8.4.58.1 with operation absolute
2020-03-31 09:57:11,714 9.5.1455: ERROR syslog156: operation failed for (28, 325). Status codes: 'blablabla'
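As written, the subsearch hands back `input` and `output` as field filters, but the INFO events carry those values only as raw text ("28 to 325"), so nothing matches. Renaming a constructed string to the special `query` field makes the subsearch contribute a raw-text search term instead. A sketch (it assumes the "(input, output)" pair in the ERROR line maps directly to the "input to output" text of the INFO line):

```spl
host="main" source="main.log" INFO syslog*
    [ search host="main" source="main.log" ERROR syslog*
      | rex "\((?<input>\d+),\s*(?<output>\d+)\)"
      | eval query = input . " to " . output
      | fields query ]
| rex "(?:\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\.)(?<id>\d{1,3})"
| table _time id
```

The subsearch should also sit before the reporting commands, since terms returned by a subsearch can only narrow the initial event retrieval.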
I'm wondering if someone can assist: the KV store has gone down on our search heads since deploying a new app yesterday. I have checked for mongod.lock and also tried a --repair, but neither of these seems to work.
Hi, I want to index this CSV file:

run_time;field1;field2;field3;field4;field5;field6;field7;field8;
80s;"iField_value1":18;"iField_value2":524;"iField_value3":2004;"iField_value4":2;"iField_value5":19;"iField_value6":500;"iField_value7":2004;"iField_value8":21;

inputs.conf is:

[monitor://\\srv\Log Sicherung\]
disabled = false
index = machinedata_w48
sourcetype = systest
crcSalt = <SOURCE>
ignoreOlderThan = 24h

and props.conf is:

[systest]
BREAK_ONLY_BEFORE_DATE =
CHARSET = WINDOWS-1252
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = 1

In Splunk, only run_time;field1 gets indexed, and field1 receives the value:

"iField_value1":18;"iField_value2":524;"iField_value3":2004;"iField_value4":2;"iField_value5":19;"iField_value6":500;"iField_value7":2004;"iField_value8":21;

Can you please help me solve this? I want all fields* indexed with their proper values (separated with ;).
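With INDEXED_EXTRACTIONS = csv, Splunk assumes a comma delimiter by default, which is why everything after the first semicolon collapses into field1. For a semicolon-separated file the delimiter must be declared explicitly in the same sourcetype stanza, a sketch:

```ini
# props.conf -- additions to the existing [systest] stanza
[systest]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ;
HEADER_FIELD_DELIMITER = ;
```

Since these are index-time settings, they belong on the forwarder (or wherever the structured parsing happens), and already-indexed events will not be re-parsed.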
Hello everyone, I have the following problem. I set the disabled flag in ip_intel with this query:

| inputlookup ip_intel where _key="js.arcgis.com" | eval disabled="1" | outputlookup append=true ip_intel

After some time I discovered that the disabled field value had disappeared. My question: how can I monitor when and why the value is no longer in place? I thought about using the internal indexes.
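One way to see which searches have been writing to the lookup (a sketch using the _audit index; it assumes the writers are ad-hoc or scheduled searches that go through outputlookup, and that you can read _audit):

```spl
index=_audit action=search info=completed search="*outputlookup*ip_intel*"
| table _time user search
| sort - _time
```

Correlating the timestamps here with when the disabled value vanished should point at the search (often a scheduled threat-intel refresh) that overwrote it.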
Simple question: how do I, from Splunk Web using the Search Processing Language, calculate the disk space used by a summary index? Summary indexing doesn't consume license, so the license-usage method only works for regular indexes, not summary indexes. Thanks!
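dbinspect reports per-bucket sizes for any index regardless of license usage, so it works for summary indexes too. A sketch (replace my_summary with the real summary index name):

```spl
| dbinspect index=my_summary
| stats sum(sizeOnDiskMB) AS total_disk_mb
```

This sums the on-disk size of every bucket the index currently holds; adding `by state` would break the total down into hot/warm/cold.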
I have a few scheduled reports created in my app. They are not triggering at the scheduled time, and there is a delay in receiving them. Could you help?
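A first diagnostic step (a sketch; substitute the real report name) is to check the scheduler's own log for skipped, deferred, or late executions:

```spl
index=_internal sourcetype=scheduler savedsearch_name="My Report"
| table _time status reason scheduled_time run_time
| sort - _time
```

A status of skipped with a reason mentioning concurrency limits usually means the search head is hitting its maximum concurrent scheduled searches, which shows up exactly as delayed report delivery.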
Hello Community! I have created a dashboard with a dbxlookup command in the search. As an admin I don't have problems with the dashboard, but users who want to view it get the above error message. I have tried to set the security for the database connection and the identity, but without success. Do you have a clue why this happens? Thanks, Rob
Hi, I have a dashboard with 3 panels; when I click on a panel it scrolls down and navigates to the details of that specific panel, e.g.: 1) CPU 2) Memory 3) DB.

When I click on CPU it takes me to the CPU details on the same page; similarly, when I click on Memory it takes me to the memory details. I want to hide the Memory and DB details when I click on CPU and display only the CPU details.

I am not passing tokens here; I am using an ID. Below is sample code for the ID:

<a class="pagelink_button nav" id="CPU">

Is there any XML or HTML code to hide the other IDs and act on only one ID? Or any other way to implement this requirement? Thanks
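A common token-based alternative (a sketch with hypothetical token and panel names, shortened to two sections; Simple XML's depends attribute hides a panel until its token is set, and a link input can drive the tokens without custom HTML IDs):

```xml
<fieldset>
  <input type="link" token="section">
    <label>Details</label>
    <choice value="cpu">CPU</choice>
    <choice value="memory">Memory</choice>
    <default>cpu</default>
    <change>
      <condition value="cpu">
        <set token="show_cpu">true</set>
        <unset token="show_memory"></unset>
      </condition>
      <condition value="memory">
        <set token="show_memory">true</set>
        <unset token="show_cpu"></unset>
      </condition>
    </change>
  </input>
</fieldset>
<row>
  <panel depends="$show_cpu$">
    <title>CPU details</title>
    <!-- CPU search/visualization here -->
  </panel>
  <panel depends="$show_memory$">
    <title>Memory details</title>
    <!-- Memory search/visualization here -->
  </panel>
</row>
```

Clicking a link sets one token and unsets the others, so only the matching panel renders; a third condition for DB follows the same pattern.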
Do you have some settings to use a proxy, or are you planning to add one? Maybe something like http://squidserverip:8080/rdap.arin.net/registry/ip/x.x.x.x