All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I need to read JSON data into Splunk. I use the Webtools |curl command for that. However, at the moment I do not have access to the data from Splunk; what I have is the data in JSON form in a file. In the meantime, until I get proper access to the data via |curl, I would like to start writing the SPL that processes it, and for that I would like to read it from the file. How would I do it? How would I read the data file in JSON format so that I get it into one single field in Splunk, like _raw? From there I could pick it up and manipulate it further with spath. I do not really want to index the file; I would just like to read it into a single field, so that I have the same starting situation as with the |curl call. Is it possible? Kind regards, Kamil
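Until the |curl access is in place, one way to get a JSON payload into a single field without indexing anything is to paste it into a makeresults search (a sketch; the JSON string here is a placeholder for the real file contents):

```
| makeresults
| eval _raw="{\"status\": \"ok\", \"items\": [{\"id\": 1}, {\"id\": 2}]}"
| spath
```

From there, spath behaves the same way it would on the field returned by the |curl call, so the rest of the SPL can be developed against it unchanged.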
I've installed an application in Splunk Cloud, and it asks to restart Splunk. Could you tell me how to do that?
Hi guys, we can see there are 6 hosts which are sending bulk events (logs) to Splunk, but we don't know who is using these hosts' events in Splunk. Is there any way we can identify the searches, reports, alerts, or dashboards where these hosts' events are being used? The purpose is that if these logs are not being used anywhere, we can stop the forwarders on those hosts.
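One common starting point (a sketch, and not exhaustive — it only catches ad-hoc searches that literally mention the host names, with host1/host2 as placeholders) is to query the _audit index for search activity:

```
index=_audit action=search info=completed
| search search="*host1*" OR search="*host2*"
| stats count latest(_time) as last_run by user, search
```

Saved searches, alerts, and dashboards would still need to be checked separately, e.g. via the REST endpoints for saved/searches and data/ui/views.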
I have some data about email statistics, where one of the relevant fields is the source IP address. I'm building a dashboard and wanted to add an input field on that source IP. That input field should have three choices:
1. All possible source IPs. That is going to be "*".
2. Our own MX addresses.
3. Every external IP (i.e., all possible source IPs except the ones listed in 2).
In the case of 1 and 2 I have a token, and the search is going to have an expression like "src_ip = X". But I cannot find how to combine it with 3, where I'd have to negate the condition, something like "src_ip != MX_IP". Any ideas? Also, at the moment I'm trying to do it via a checkbox, but if another input type would be more suitable, let me know.
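One common workaround (a sketch; MX_IP1/MX_IP2 are placeholders for the real MX addresses) is to put the whole search fragment, including the negation, into the choice value, so the token can be dropped into the base search as-is:

```xml
<input type="dropdown" token="ip_filter" searchWhenChanged="true">
  <label>Source IP scope</label>
  <choice value="src_ip=*">All source IPs</choice>
  <choice value="src_ip IN (&quot;MX_IP1&quot;, &quot;MX_IP2&quot;)">Our MX addresses</choice>
  <choice value="NOT src_ip IN (&quot;MX_IP1&quot;, &quot;MX_IP2&quot;)">External IPs</choice>
  <default>src_ip=*</default>
</input>
```

The search then just contains `... $ip_filter$ ...`. A dropdown fits this better than a checkbox, since the three choices are mutually exclusive.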
Hi, I would appreciate your help in implementing the following alert with Splunk and the Machine Learning Toolkit. Let's start with a simple example. Suppose I have one host in my system which sends one of two predefined messages. Then each event consists of two fields: [_time, message]. I can use the timechart command to generate two new numerical timeseries: the count of total events, and the count of each predefined message. Finally, I can use the Machine Learning Toolkit to detect outliers and anomalies.

Now I would like to describe my real situation: I have an unknown number of hosts, and each host may send any kind of message. A typical event looks like: [_time, host, message]. I would like to implement an outlier alert for each possible host, each possible message, and for the total number of messages per host. I prefer to have a single alert for all combinations of host and message_type. In addition, I would like to have a visualization of the timeseries of each combination. Unfortunately, I don't have a clue how to implement this task in SPL. A Python solution might look like the following:

find unique hosts
find unique messages
for host in hosts:
    for msg in messages:
        do anomaly detection (host, msg)
    do anomaly detection (host, msg_count)
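One way to sketch this in SPL (assuming a message_type field; the index name and span are placeholders) is to build one combined series per host/message pair and let MLTK's DensityFunction fit each series separately via its by clause, which avoids the explicit nested loop:

```
index=my_events
| bin _time span=1h
| stats count by _time, host, message_type
| eval series=host . ":" . message_type
| fit DensityFunction count by "series" into host_message_model
```

A scheduled alert could then run the same preparation followed by `| apply host_message_model` and trigger on the rows the model flags as outliers; the same stats output, piped into timechart with the series field, gives the per-combination visualization.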
Hey Splunkers, over the last few days I have been trying to learn and understand the principles of LISPY in order to understand the following phenomenon. By now I can tell that I've learned a lot, but I still can't comprehend how Splunk puts its LISPY queries together. Scenario: our analysts work with Windows Defender logs, and we use two TAs (https://splunkbase.splunk.com/app/3734/ and https://splunkbase.splunk.com/app/5208/) to extract and normalize the data. The TA by nextpart renames the source and applies an automatic lookup, as you can see in this props.conf:

[source::...WinEventLog:Microsoft-Windows-Windows Defender/Operational]
# Default shorten to easy readable source
EVAL-source = "XmlWinEventLog:Defender"
LOOKUP-CategoryString_for_windows = windefender_signature_lookup signature_id OUTPUTNEW action, CategoryString, result, subsystem
...

I understand what is happening here so far, but now we see strange behaviour while running SPL on the data using the fields "index", "source" and "CategoryString" ("CategoryString" comes as output from the automatic lookup). That was when I took a closer look at the LISPY and was able to locate the problem:

SPL 1: index=indexname source="XmlWinEventLog:Defender" CategoryString=action
LISPY 1: [ AND action index::indexname [ OR source::*wineventlog:microsoft-windows-windows\ defender/operational source::xmlwineventlog:defender ] ]
Results: No

SPL 2: index=indexname SourceName="Microsoft-Windows-Windows Defender" CategoryString=action
LISPY 2: [ AND defender index::indexname microsoft windows [ OR action source::*wineventlog:microsoft-windows-windows\ defender/operational ] ]
Results: Yes

As far as I understand, the string "action" is not found in the tsidx file, as it is returned as a field value from an automatic lookup, and that's why our first LISPY does not match any data and the SPL gives back 0 results.
In the second SPL, where we used SourceName (not indexed) instead of source (indexed), the LISPY looked different and worked, as the string "action" is now in the OR clause and not in the AND clause at the beginning. My questions now are: as I'm a very curious guy, can someone explain this behaviour to me? And what would or could be a good way or workaround to "fix" this, so that the fields "source" and "CategoryString" can be used together? Regards, Lombs
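One workaround worth trying (a sketch, untested against this data) is to keep only indexed terms in the initial search and filter on the lookup-derived field after the first pipe, so "action" never becomes a required indexed term in the LISPY:

```
index=indexname source="XmlWinEventLog:Defender"
| search CategoryString=action
```

The post-pipe `| search` (or `| where CategoryString="action"`) is evaluated after the automatic lookup has populated CategoryString, at the cost of retrieving more events from the index first.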
Hi team, how do we install the UF via GPO? Is there a specific command line to run the .msi file that uses our username and password? Thanks and best regards,
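For a silent GPO or script install, the universal forwarder MSI accepts properties on the msiexec command line; a sketch (the deployment server address and the credentials are placeholders, and the exact property names should be double-checked against the docs for your UF version):

```
msiexec.exe /i splunkforwarder-x64.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.example.com:8089" SPLUNKUSERNAME=localadmin SPLUNKPASSWORD=ChangeMe123! /quiet
```

In a GPO, this typically runs as a computer startup script, since software installation GPOs cannot pass MSI properties directly without a transform (.mst) file.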
Hi, I have deployed the Splunk_TA_aws application in a test environment. For authentication I'm using AWS roles. The data seems to be coming in; however, I get the following errors in the _internal index (the source is python.log):

ERROR REST ERROR[1020]: Fail to encrypt credential information - {"base_app": "Splunk_TA_aws", "endpoint": "configs/conf-splunk_ta_aws_iam_roles", "handler": "BaseRestHandler", "encrypted_args": ["arn"], "name": "inventory_access", "error": "'Entity' object has no attribute 'update'"}
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunktalib/rest_manager/cred_mgmt.py", line 43, in encrypt
    data.update({arg:CredMgmt.ENCRYPTED_MAGIC_TOKEN for arg in data if arg in self._encryptedArgs})
AttributeError: 'Entity' object has no attribute 'update'

Please note that the password file within the local folder seems to be created OK. Has anyone come across this error before? Is it harmful? Regards,
New to Splunk (on Cloud). I have a pie chart and am trying to create a drilldown that will set a token, but the only option I see is "Link to URL". As per the documentation at https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/DrilldownIntro I should be seeing 4 options. Any idea?
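In a classic (SimpleXML) dashboard you can still set a token from a pie-chart click directly in the XML source, even when the UI editor only offers "Link to URL"; a sketch (the token name and query are placeholders):

```xml
<chart>
  <search>
    <query>index=my_index | stats count by category</query>
  </search>
  <option name="charting.chart">pie</option>
  <drilldown>
    <set token="selected_category">$click.value$</set>
  </drilldown>
</chart>
```

Other panels can then reference $selected_category$ in their queries, and $click.value$ carries the label of the clicked pie slice.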
I notice that the Splunk App for Infrastructure support pages now have a header saying that this product is end of life and will cease to be developed beyond August next year (2022): "On August 22, 2022, the Splunk App Infrastructure will reach its end of life and Splunk will no longer maintain or develop this product." I can't find any other announcements from Splunk about this. Is there a plan to replace it with another product, or just to cease developing Splunk in this direction? Thanks, Eddie
I have certain panels in my dashboard using the | pivot command and others using the | datamodel command (because there are certain things you can't do with pivot). I want to allow the user to select the granularity of the time charts in the dashboard when they select a longer time range. So, in the change event handler, I'm trying to convert the span value into something compatible with the period argument of pivot.

<input type="dropdown" token="span" searchWhenChanged="true">
  <label>Granularity</label>
  <choice value="">Default</choice>
  <choice value="span=1m">1 min</choice>
  <choice value="span=15m">15 min</choice>
  <choice value="span=30m">30 min</choice>
  <choice value="span=1h">1 hour</choice>
  <choice value="span=1d">1 day</choice>
  <default></default>
  <change>
    <eval token="PivotPeriodAutoSolo">case("$span$"="span=1m", "1m", "$span$"="span=15m", "15m", "$span$"="span=30m", 30m, "$span$"="span=1h", 1h, "$span$"="span=1d", 1d, 1=1, "auto")</eval>
  </change>
</input>

But for some reason this change does not get applied.
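One thing that stands out (an observation, not verified against this dashboard): in the case() expression the values 30m, 1h, and 1d are unquoted, which makes the eval expression invalid, and an invalid <eval> silently fails, so the token is never set. With every value quoted it would be:

```xml
<change>
  <eval token="PivotPeriodAutoSolo">case("$span$"="span=1m", "1m", "$span$"="span=15m", "15m", "$span$"="span=30m", "30m", "$span$"="span=1h", "1h", "$span$"="span=1d", "1d", 1=1, "auto")</eval>
</change>
```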
Hi, I am running into a problem when it comes to subsearches. I want to use results from the first search to plug into the second search. The uid/keyvalue ties multiple sourcetypes together; each sourcetype carries both similar and unique information. It would be great to correlate this info in a table or with stats. FYI: both sourcetypes have a uid field; the dns sourcetype contains the query field; the conn sourcetype has the other fields I want to display.

index="main" sourcetype=conn uid=keyvalue
    [ search index="main" sourcetype=dns
      | rename uid as keyvalue
      | table keyvalue ]
| fields proto, query, id.orig_h
| table uid, query, proto, id.orig_h
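As an alternative to the subsearch, a common correlation idiom (a sketch against the field names above, untested on this data) is to search both sourcetypes at once and roll the rows up by uid with stats:

```
index="main" (sourcetype=conn OR sourcetype=dns)
| stats values(query) as query values(proto) as proto values("id.orig_h") as id_orig_h by uid
| where isnotnull(query)
```

Each output row then combines the dns-side query with the conn-side fields for the same uid, and the final where keeps only uids that actually appeared in the dns sourcetype.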
I am in the process of writing a maintenance plan for my distributed environment, including an Enterprise Security on-prem app. Has anyone ever written such a plan? Please advise.
We have the following data ingested (not JSON format), where we are trying to extract the "DeletedImages": 0 and "DeletedImages": 24 value pairs.

Data:
2021-05-04 - 13:50:41.878 - INFO : Action completed in 0.192996025085 seconds, result is { "images-deleted": 0, "metrics": { "Action": "Ec2DeleteImageAction", "Data": { "DeletedImages": 0 }, "Version": "1.0", "Type": "action", "ActionId": "12345" }, "account": "123456789", "task": "ABCD-EFGE-QAQ-DELETE-IMAGE", "images": 535, "region": "ab-east-1" } - ReconNum:123456678901234
2021-05-04 - 13:55:41.878 - INFO : Action completed in 0.192996025085 seconds, result is { "images-deleted": 0, "metrics": { "Action": "Ec2DeleteImageAction", "Data": { "DeletedImages": 24 }, "Version": "1.0", "Type": "action", "ActionId": "12345" }, "account": "123456788", "task": "ABCD-EFGE-QAQ-DELETE-IMAGE", "images": 536, "region": "ab-east-1" } - ReconNum:123456678901235
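Since each event embeds the JSON as plain text, a simple rex extraction can grab the number directly (a sketch; the index/sourcetype qualifier is a placeholder):

```
index=my_index sourcetype=my_sourcetype
| rex "\"DeletedImages\":\s*(?<deleted_images>\d+)"
| table _time, deleted_images
```

The capture group matches the digits after the literal "DeletedImages": key, yielding 0 for the first event and 24 for the second.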
Hello guys, I was wondering whether it is possible to find the common and uncommon values between n fields after using a multisearch command. I can't seem to find a function in Splunk to yield the intersection between values, or is there one? Let's say that my code looks like this:

|multisearch
    [|search index=BOOK | fields A]
    [|search index=FLIGHT | fields B]
    [|search index=HOTEL | fields C]

A, B and C are IDs from different customers, and I'd like to know which customers are common between the three fields, and also which customers are exclusive to each field (meaning their ID only appears in either field A, B or C). Please don't judge me. I started by using the stats command, doing something like | stats values(A) as A values(B) as B values(C) as C, but since there is no other field to use in something like a "by" clause, I was only able to get the info into a table. Any information or documentation is so welcome. Thank you so much guys. Kindly, Cindy
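One way to sketch this (assuming A, B, and C all hold the customer ID) is to normalise the three fields into a single id field, tag each row with its source, and count distinct sources per ID:

```
| multisearch
    [ search index=BOOK | eval id=A, src="BOOK" ]
    [ search index=FLIGHT | eval id=B, src="FLIGHT" ]
    [ search index=HOTEL | eval id=C, src="HOTEL" ]
| stats dc(src) as src_count values(src) as sources by id
| eval membership=case(src_count=3, "common", src_count=1, "exclusive to " . mvindex(sources, 0), 1=1, "partial overlap")
```

Customers in all three indexes come out as "common", and those seen in only one index as "exclusive to" that index, which gives the intersection and the per-field exclusives in one pass.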
Is there a way to skip hot buckets (local storage) and ingest/index data directly into SmartStore (S3 buckets)?
Hello, I am looking for recommendations on combining two searches that use the same lookup CSV but link to the lookup via different columns of the CSV. Thank you all for taking a look and providing insights.

CSV lookup columns: Job_Config_Name, Job, Job_Thread_Name, Frequency_mins, Job_Name, Job_Type, Job_Task, Active

Search 1:
index="idx_cibca_App_prod" sourcetype="tomcat:runtime:log:jpma" AND "lastUpdatedTS" OR "Time taken for" host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8
| rex "(?<Job_Thread_Name>[a-z].*Range)"
| rex "(?<DATE_TIME>^(\d+)-(\d+)-(\d+)(\s+)(\d+):(\d+):(\d+).(\d+))"
| stats latest(_time) as _time, latest(host) as host by Job_Thread_Name
| lookup App-Job-Index-Lookup.csv Job_Thread_Name OUTPUTNEW
| eval Thread_Last_Executed=strftime(_time, "%Y-%m-%d %I:%M:%S %p"), EPOC_Time=(_time)
| eval Lag=round((now()-EPOC_Time)/60)
| eval Status=if(isnull(Lag), "NOT OK - Job not running", if(Lag<= if(Frequency_mins>60, Frequency_mins+10, 70),"OK","NOT OK - Job not running - Lag found"))
| table Job_Name, host, Job_Thread_Name, Frequency_mins, Job_Config_Name, Thread_Last_Executed, Lag, Status, Job_Status, Job_Status_Logged, TIMETAKEN_IN_MINS

Search 2:
index="idx_cibca_App_prod" sourcetype="tomcat:runtime:log:jpma" AND "Job Details job name:" host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8
| rex "Job Details job name:(?<Job_Config_Name>.*) status:(?<JOB_STATUS>.*) timetaken:(?<TIMETAKEN>.*) minutes"
| rex "(?<DATE_TIME>^(\d+)-(\d+)-(\d+)(\s+)(\d+):(\d+):(\d+).(\d+))"
| stats latest(DATE_TIME) AS Job_Status_Logged, latest(JOB_STATUS) AS Job_Status, latest(TIMETAKEN) AS TIMETAKEN_IN_MINS by Job_Config_Name
| lookup App-Job-Index-Lookup.csv Job_Config_Name OUTPUT Job_Name, Frequency_mins, Job_Config_Name, Job_Thread_Name
| table Job_Name, Job_Thread_Name, Frequency_mins, Job_Status, Job_Status_Logged, TIMETAKEN_IN_MINS
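One common pattern for combining the two (a sketch; it assumes Job_Config_Name ends up populated in both result sets, since both searches output it from the lookup) is to append the second search to the first and collapse the rows with stats:

```
<search 1, everything up to its final table command>
| append
    [ <search 2, everything up to its final table command> ]
| stats values(*) as * by Job_Config_Name
| table Job_Name, host, Job_Thread_Name, Frequency_mins, Job_Config_Name, Thread_Last_Executed, Lag, Status, Job_Status, Job_Status_Logged, TIMETAKEN_IN_MINS
```

The `stats values(*) as * by Job_Config_Name` merges the two rows per job into one, regardless of which lookup column each search used to reach the CSV.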
I am new to Splunk. Thank you all for helping with the monitoring issue that I originally asked about. Now I cannot log in to Splunk and need to reset the admin password. How do I do that on CentOS 7?
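For Splunk 7.1 and later, the usual procedure (a sketch; the paths assume a default /opt/splunk install, and the password is a placeholder) is to move the existing passwd file aside and seed a new admin password via user-seed.conf:

```
/opt/splunk/bin/splunk stop
mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak
cat > /opt/splunk/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = NewSecretPassword
EOF
/opt/splunk/bin/splunk start
```

On restart, Splunk consumes user-seed.conf and recreates the admin account with the seeded password; run the commands as the user that owns the Splunk installation.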
Hi, I have inherited a KPI that monitors disk space in ITSI. The search works fine and returns results when the thresholds are breached; however, the episodes continue even when the server is in maintenance mode. I think I know why, but I don't yet know how to work around it. This is the KPI search:

| mstats avg(LogicalDisk.%_Free_Space) as "logicaldisk_free_space" avg(PhysicalDisk.%_Disk_Read_Time) as "physicaldisk_read_time" avg(PhysicalDisk.%_Disk_Write_Time) as "physicaldisk_write_time" avg(Network_Interface.Packets_Received/sec) as "network_packets_received_per_second" avg(Network_Interface.Packets_Sent/sec) as "network_packets_sent_per_second" avg(Network_Interface.Bytes_Received/sec) as "network_bytes_received_per_second" avg(Network_Interface.Bytes_Sent/sec) as "network_bytes_sent_per_second" avg(Network_Interface.Packets_Outbound_Errors) as "network_packets_outbound_errors" WHERE `sai_metrics_indexes` AND instance!=_Total instance!=P: by host, instance span=30s
| eval host_dev=host . ":" . instance
| eval "physicaldisk_total_time" = physicaldisk_read_time + physicaldisk_write_time
| eval "network_packets_total_per_second" = network_packets_received_per_second + network_packets_sent_per_second
| eval "network_mbs_total_per_second" = (network_bytes_received_per_second + network_bytes_sent_per_second)/1000000

The threshold field is logicaldisk_free_space. The split-by field is host_dev, which, as you can see, combines the host name with the disk device, like this: HOST1234:C:. The data is filtered by service with the host field, and the result in the service analyser looks good. The problem is that, with the entity name now being HOSTNAME:C:, when the host is put into maintenance mode this KPI keeps creating episodes. Can someone help me with a practical way to do this and still use maintenance mode successfully? Cheers
Hello, my data looks like this (also attached as a PNG for better readability):

2021-04-28 - 22:01:14.728 - INFO : Action completed in 7.90478181839 seconds, result is { "images-deleted": 8, "images": 444, "account": "012345678901", "task": "DELETE-AMI-TASK", "metrics": { "Action": "Ec2DeleteImageAction", "Data": { "DeletedImages": 8 }, "Version": "1.0", "Type": "action", "ActionId": "aac9da60-d325-4ed5-ae30-2e11fe7a7e39" }, "deleted": { "us-east-1": [ "ami-0dfd9eee9557ffcb3", "ami-0fec918b8f4b5bf04", "ami-00b68913ba31e0590", "ami-0859ee921a1ff93d0", "ami-06bdf5c91701957a2", "ami-00945fa203dba66df", "ami-0b35e3e1f90ff9233", "ami-032006127456fba8a" ] }, "region": "us-east-1" } - ReconNum:1619647200000

I want to extract everything between the first { and the last } with rex, cast it as JSON via spath, and then pull out the value of DeletedImages. My search string is:

| rex "(?<jsonData>{[^}].+})"
| spath input=jsonData output=myfield path=metrics.Data.DeletedImages

But it doesn't seem to want to pull out DeletedImages. What am I doing wrong?
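One likely cause (an educated guess, not verified against this exact event): by default the regex dot does not match newlines, so if the raw event spans multiple lines the capture stops before reaching metrics.Data. Adding the (?s) flag lets . cross line breaks, and escaping the braces keeps the intent explicit:

```
| rex "(?s)(?<jsonData>\{.+\})"
| spath input=jsonData output=myfield path=metrics.Data.DeletedImages
```

The greedy `.+` between the first `\{` and the last `\}` captures the whole JSON object, after which the spath path resolves metrics.Data.DeletedImages as before.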