All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello guys, I am running AppInspect on my add-on and I am encountering one failure which I am unable to resolve. Please find the failure below. Shouldn't it be a false positive? How do I deal with this?

{
  "checks": [
    {
      "description": "Check that the app does not include viruses.",
      "messages": [
        {
          "code": "reporter.fail(message)",
          "filename": "check_viruses.py",
          "line": 41,
          "message": "An issue was found by ClamAV: A virus was detected by ClamAV: FOUND PUA.Html.Exploit.CVE_2014_0322-1",
          "result": "failure",
          "message_filename": null,
          "message_line": null
        }
      ],
      "name": "check_for_viruses",
      "tags": [ "splunk_appinspect", "cloud", "antivirus", "private_app" ],
      "result": "failure"
    }
  ],
  "description": "Malware, viruses, malicious content, user security standards (dynamic checks)",
  "name": "check_viruses"
}

Thanks & Regards,
Madhuri
Hi, I am trying to use the slim utility to validate and package my app. When running slim validate, it complains about an undefined setting python.version in alert_actions.conf, even though the setting is clearly documented: https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/Alertactionsconf Did I miss something?

slim validate: Validating app at "<My App name>"...
slim validate: [WARNING] /path/to/<My App name>/default/alert_actions.conf, line 2: Undefined setting in stanza [default]: python.version
I have a query which runs once a day and produces a list of all countries a user has visited over the last 30 days. Below is an example of this list, which I call 'Country_hist'.

I have another query which runs far more often and generates a list of all countries a user has visited over the past 24 hours. I want to compare the list of countries from this query with the countries on the 'Country_hist' list. I added a screenshot below to show what I mean: on the left side is a column with the countries a user visited over the past 24 hours, and on the right side are the two history columns.

I then check whether items in the left column are also in the right column: has the user visited a country in the last 24 hours which he has not visited over the past 30 days? My problem is that the two lists are not formatted the same. In the screenshot above, please look at the location of the cursor: in the left-hand column Japan and South Korea are not on the same row (separated by what seems to be a line break), while they are on the same row in the right-hand column.

So the left column contains:
Japan
South Korea

And the right column contains:
Japan South Korea

Currently I use this function to compare the two columns:

| eval check=if(isnull(Country_hist), 0, if(like(Country_hist, "%"+$Country_lst1$+"%"), 0, 1))

This returns a 1 in the example above, but I would want it to return a 0 (since the visited countries are the same). How can I fix this, so Splunk understands that the values are the same even if they are on different rows? Thanks!
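One way to make the comparison order-insensitive, assuming both columns are multivalue fields, is to sort and join each list into a single normalized string before comparing. A sketch, with field names taken from the question (adjust to your data; if either field is a single string with embedded line breaks rather than a true multivalue field, it would first need to be split with makemv):

```
| eval hist_norm = mvjoin(mvsort(Country_hist), ",")
| eval cur_norm  = mvjoin(mvsort(Country_lst1), ",")
| eval check = if(isnull(Country_hist), 0, if(cur_norm == hist_norm, 0, 1))
```

Sorting first means "Japan, South Korea" and "South Korea, Japan" normalize to the same string regardless of how the rows were split.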
Hello, I am trying to set up a Splunk alert that lists the Java processes consuming more than 80% CPU and memory and triggers an alert. Below is the base search that I created, but I am not sure how to add the condition. Please help.

top host=xzy index=os java latest=now| top limit=5 COMMAND PID pctCPU pctMEM
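A sketch of one way to add the threshold, assuming pctCPU and pctMEM are already extracted as fields in the os index (field and host names are taken from the question):

```
index=os host=xzy COMMAND=java
| stats latest(pctCPU) as pctCPU latest(pctMEM) as pctMEM by PID, COMMAND
| where pctCPU > 80 AND pctMEM > 80
```

Saved as an alert, this can be set to trigger when the number of results is greater than zero.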
I am trying to get logs from a firewall into Splunk. Usually I work with regex to extract the fields, but these logs don't come in a predictable manner, so the fields are not always there, or not in the same order. So what I am trying to do is let Splunk automatically detect the fields by itself. Given the following example raw log:

2021-02-15T09:50:22Z %FTD-6-430002: EventPriority: Low, DeviceUUID: ef9c5cce-2400-11eb-a8f2-ce5d579dab29, InstanceID: 5, FirstPacketSecond: 2021-02-15T09:50:22Z, ConnectionID: 26265, AccessControlRuleAction: Allow, SrcIP: 195.180.144.165, DstIP: 172.16.20.86, SrcPort: 49609, DstPort: 443, Protocol: tcp, IngressInterface: INT_DMZ_External, EgressInterface: INT_LAN, IngressZone: DMZ_External, EgressZone: LAN, IngressVRF: Global, EgressVRF: Global, ACPolicy: Merbag Default Access Control Policy, AccessControlRuleName: WAP2LAN, Prefilter Policy: Merbag Default Prefilter Policy, Client: SSL client, ApplicationProtocol: HTTPS, InitiatorPackets: 3, ResponderPackets: 1, InitiatorBytes: 389, ResponderBytes: 70, NAPPolicy: Balanced Security and Connectivity, URLReputation: Unknown, URL: https://adfs.company.com

I have tried lots of things in props.conf but sadly, nothing seems to work:

[firewall]
category = Network & Security
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
TZ = Europe/London
KV_MODE = auto
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = :

The first three lines work great, but the last three don't seem to work at all. Sadly, there is no onboard GUI in Splunk where I can run the config against a log and see the output live; it's just editing the config file and restarting the service over and over again. It would be helpful if there were actual examples (config and example log) in the documentation.
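KV_MODE = auto expects key=value pairs, so "Key: value," pairs usually need an explicit search-time extraction. One possible approach is a REPORT with a dynamic-key transform; a sketch (stanza names and the regex are illustrative, not from official FTD documentation):

props.conf:

```
[firewall]
REPORT-ftd_kv = ftd_kv
```

transforms.conf:

```
[ftd_kv]
REGEX = ([A-Za-z ]+): ([^,]+)(?:,|$)
FORMAT = $1::$2
```

The $1::$2 FORMAT turns each captured pair into a field name and value, so the extraction works regardless of which keys appear or in what order. Keys containing spaces (such as "Prefilter Policy") will be normalized by Splunk's key cleaning to underscores by default.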
Hi there, I've got some pain aggregating the results from two queries, which seemed simple at first glance...

Query 1:

sourcetype="xxxx" severity=medium OR severity=high OR severity=critical | timechart span=1d count by severity

Query 2:

sourcetype="yyyy" request_status=blocked violation_rating>=3 | eval severity=case(violation_rating=3,"medium",violation_rating=4,"high",violation_rating=5,"critical") | timechart span=1d count by severity

The two queries produce the same columns (_time, critical, high, medium), but I find it fairly difficult to simply aggregate the results. If you have any hints...

This attempt produces NULL values, and the values in the output are not correct:

(sourcetype="yyyy" request_status=blocked violation_rating>=3) OR (sourcetype="xxxx" severity=medium OR severity=high OR severity=critical) | eval severity=case(violation_rating=3,"medium",violation_rating=4,"high",violation_rating=5,"critical") | timechart span=1d count by severity
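A likely cause of the NULL values: in the combined search, the eval overwrites severity for the xxxx events, because case() with no default returns NULL when violation_rating is absent. One way to preserve the original value for xxxx events, sketched:

```
(sourcetype="yyyy" request_status=blocked violation_rating>=3)
OR (sourcetype="xxxx" (severity=medium OR severity=high OR severity=critical))
| eval severity = coalesce(case(violation_rating=3,"medium",
                                violation_rating=4,"high",
                                violation_rating=5,"critical"), severity)
| timechart span=1d count by severity
```

coalesce() keeps the computed severity when violation_rating exists and falls back to the original severity field otherwise, so both sourcetypes feed the same timechart.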
I have a scenario where I need to send email to owner of the account when it is locked. I want to run the query for every 5mins checking for last1 hr data, and the email should be sent only once in a day even the account is locked multiple times in a day. And the email configuration is done in the query itself, passing owner field values to email address. As we cannot use throttling in this case(as mail is being sent from search query), could anyone please suggest a solution?
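One possible pattern, assuming a tracking lookup is acceptable: record the day each owner was last emailed, and filter repeats inside the search itself. A sketch (the lookup name "lockout_notified" and its fields are hypothetical, and the base search and sendemail options stand in for your existing ones):

```
<your base search finding locked accounts over the last hour>
| eval today = strftime(now(), "%Y-%m-%d")
| lookup lockout_notified owner OUTPUT last_notified
| where isnull(last_notified) OR last_notified != today
| eval last_notified = today
| outputlookup append=true lockout_notified
| sendemail to=... <your existing sendemail options>
```

Owners already notified today are dropped by the where clause, so the downstream sendemail fires at most once per owner per day. The lookup would need occasional pruning of old rows.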
NOTICE: <script>: [3473090307|3167225225](SENDER[10.65.197.2:5073]): Current Active Inbound Calls:
NOTICE: <script>: [3218481898|03116204181](SENDER[192.168.15.11:7060]): Current Active Inbound Calls: 8

I want to extract the integer value after the colon (:), i.e. 0 and 8, and then display these results as a timechart. I'm writing it as:

host=Kamailio NON=Active | eval totalCount=mvcount(NON) | timechart span=300s count by totalCount

P.S.: NON is a field with multiple other values, and Active is one of them, which contains the integers I want to display. Any degree of help would be appreciated.
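mvcount() counts how many values a field has rather than reading the number after the colon, so a rex extraction may be closer to what you want. A sketch (events where nothing follows the colon, as in the first sample line, get no match and are filled with 0):

```
host=Kamailio "Current Active Inbound Calls"
| rex "Current Active Inbound Calls:\s*(?<active_calls>\d+)"
| fillnull value=0 active_calls
| timechart span=300s sum(active_calls) as total_active_calls
```

Depending on whether you want the sum, average, or latest reading per 5-minute bucket, swap sum() for avg() or latest().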
I have an event value like 2021-02-15 18:07:33,936, where the last value after the comma (936) is the response time in ms. I tried to extract that value to average the response time, but it did not work. How can I extract the value after the comma from that field? I tried something like this:

avg(mvindex(split(TimeStamp,","),-1)) as AverageResponse

TimeStamp=2021-02-15 18:07:33,936

Best Regards,
Foysal
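stats cannot take an eval expression like split() directly as its aggregation argument; the value has to be extracted into a field first. A sketch, assuming TimeStamp holds the raw string shown above:

```
| eval resp_ms = tonumber(mvindex(split(TimeStamp, ","), -1))
| stats avg(resp_ms) as AverageResponse
```

tonumber() ensures "936" is treated as a number rather than a string before averaging.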
We are facing an issue where the field type is number in a KV store collection. When we enter a value through the Lookup Editor, a space is being added to the value.
Hello, I have a data stream getting populated every 5 minutes as below. There are hundreds of features in the data.

Feature: Rectagle Side: 4 User: A Used:1 Time: 1/25/2021 5:00:00 Block:1
Feature: Rectagle Side: 4 User: A Used:1 Time: 1/25/2021 5:00:00 Block:2
Feature: square Side: 4 User: B Used:1 Time: 1/25/2021 5:05:00 Block:1
Feature: Square Side: 4 User: B Used:1 Time: 1/25/2021 5:05:00 Block:2

I need to sum the Side field along with the Used column, something like below:

Feature: Rectangle Side: 8 used:2 Time: 1/25/2021 5:00:00

The problem I have is that I could not get the sum of Side right: it either comes out as a total across all events (4 * the number of times the feature appears across the period I selected) or just 4. I am searching over a period of the last 24 hours. I tried this, but it does not give the right value:

| foreach feature* [ eval subtotal = subtotal + 'side']
| stats max(subtotal) as TOTAL by _time
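If the goal is a per-feature, per-interval total, a stats with a by clause may be simpler than foreach. A sketch (field names per the sample; bin groups the events into the 5-minute batches so the sum is per interval rather than across the whole 24 hours):

```
| bin _time span=5m
| stats sum(Side) as Side, sum(Used) as used by Feature, _time
```

Note that "Rectagle"/"Rectangle" and "square"/"Square" in the sample would count as distinct Feature values; lower()-ing the field first would merge them if that is unintended.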
Hi, in my organization a particular user ID has been disabled. Is there any drawback for searches or the running environment? For example:
1. If the user created any searches and the ID has been disabled, will those private searches still run or not?
2. Is there any option to modify private searches?

Thanks and regards,
Nikhil Dubey
4nikhildubey@gmail.com
Hi, I'm trying to pull the event logs for when an account is locked in Active Directory, but I see multiple entries for a single account, one entry every 1 or 2 hours. Could you please help me understand why duplicate entries are being generated in Splunk?
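Windows typically logs event 4740 ("A user account was locked out") each time a lockout is processed, so repeated lockouts of the same account each generate a fresh event. One way to keep a single row per account per day, sketched (the index, sourcetype, and field names assume the standard Splunk Add-on for Windows; adjust to your environment):

```
index=wineventlog EventCode=4740
| bin _time span=1d
| dedup _time, Account_Name
| table _time, Account_Name, ComputerName
```

Comparing the raw events' ComputerName values can also reveal whether different domain controllers are each logging the same lockout.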
Hi, we noticed errors in splunkd.log. These are all the messages from Timeliner that appear on the search head:

11-11-2020 18:15:23.008 +0100 WARN  Timeliner - Error requesting remote event from https://xyz return code 404
11-11-2020 18:15:23.011 +0100 ERROR Timeliner - 50 Events missing due to corrupt or expired remote artifact(s).
11-11-2020 18:15:28.389 +0100 ERROR Timeliner - 50 Events missing due to corrupt or expired remote artifact(s).
11-11-2020 18:15:29.204 +0100 ERROR Timeliner - 36 Events missing due to corrupt or expired remote artifact(s).
11-11-2020 18:15:29.686 +0100 ERROR Timeliner - 50 Events missing due to corrupt or expired remote artifact(s).
12-04-2020 20:24:12.263 +0100 WARN  Timeliner - Error requesting remote event from https://xyz, return code 404
12-04-2020 20:24:12.266 +0100 ERROR Timeliner - 50 Events missing due to corrupt or expired remote artifact(s).

Could you please check and advise on this?
I am trying to write a single query like below; Id is the common field in all the queries.

query1 + join[query2], query1 + join[query3]

I am able to join query1 with query2, but I am not sure how to join query1 with query3 in a single query. Can someone please help?
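join takes one subsearch at a time, but joins can be chained. A sketch (query1/query2/query3 stand in for your actual searches):

```
query1
| join type=left Id [ search query2 ]
| join type=left Id [ search query3 ]
```

Be aware that each join subsearch is subject to the usual subsearch result and runtime limits; an alternative that avoids them is combining all three searches with OR and aggregating with something like `| stats values(*) as * by Id`.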
Hi! This is our first time deploying a Splunk Enterprise environment, so I would like to confirm the composition of our servers.

[Question] Is it possible to deploy a Splunk Enterprise environment with the following servers?
1. Search Head (1 server)
2. Indexer (2 servers with clustering)
3. Deployment Server and License Master (1 server)
4. Cluster Master (1 server)

We will use this environment as the first step of our Splunk rollout (an environment where the first step can operate while the hardware for the second step is being built). We will then create the environment for the second step. Once the second-step environment is deployed, we will change the universal forwarders' connection settings from the first-step environment to the second-step environment.

Best regards.
Hi, is there a way to dynamically pass a value like below in Splunk when running a search from the CLI? I am trying to write a script that finds the event count from source files on a heavy forwarder and compares it to the count indexed, by running the search below:

/opt/splunk/bin/splunk search 'index=*  source=${c2_source}/*.gz  | stats count' -uri 'https://<SH IP>:8089/' -auth admin:xxxxxxxxxx  2>/dev/null

Or is there a way to achieve this using REST API commands?
So far, I have successfully completed the following steps: I pulled the latest phdrieger/mltk-container-golden-image-cpu:3.5.0 docker image from Docker Hub to my laptop using Docker for Windows 10 (v20.10.2). (I have tried "phdrieger/mltk-container-golden-image-gpu:3.5.0" a number of times, but for some reason it does not work for me...) I ran a docker container (single mode) from DLTK 3.4 in Splunk Enterprise 8.1.2. The __dev__ container in the DLTK is running (Container Model Info: Container Image = "TensorFlow CPU (deprecated)" / GPU runtime = "none" / Cluster target = "docker"). I can access the JupyterLab, TensorBoard, and API URLs without any issue, and I can connect to the container via the command line as well. As the next step, I attempted to send some sample data from Splunk to the container using the search command: | inputlookup diabetes.csv | fit MLTKContainer response from * algo=binary_nn_classifier epochs=10 mode=stage into MyModel (This is the sample search command provided in the "binary_nn_classifier.ipynb" file in Stage 1.) I noticed that the line starting with "| fit MLTKContainer" does not work; any command lines above it work fine. I tried a couple of other notebooks for TensorFlow 2.0 listed in JupyterLab in the container, without any luck; all have the same issue. I always make sure that the container is running from both the Splunk and Docker sides. Please let me know what I might have missed in the setup/configuration. Thank you.
Hi Team, can anyone guide me on how to export a file? I am not getting that option. I have attached a screenshot. Thanks in advance.
I have set up a second deployment server for disaster recovery purposes. I am using rsync in a cron job to copy the deployment-apps directory and the serverclass.conf file from the primary DS to the cold DS. When I look at the cold DS's Forwarder Management console, my app numbers match the primary DS, but my server classes are short by three. I grepped serverclass.conf on both servers and the missing classes are not there, and I have also looked in default/serverclass.conf. What would cause three not to copy over? Is there another file I need to grab off the primary DS?
I have setup a second Deployment Server for disaster recovery purposes. I am using rsync in a cron job to copy the deployment-apps directory and the serverclass.conf file over from the primary DS to the cold DS. When I look at my cold DS Forward Management console my Apps number match with the primary DS but my Server Classes are short by three server classes. I grep the serverclass.conf from both servers and the missing classes are not there plus I have looked in the default/serverclass.conf. What would cause three not to copy over? Is there another file I need to grab off the primary DS?