All Topics


Hi, I am getting in the data below (green box in image). In green is the raw data and in purple is the event data. The issue is that there are 3 source types in one, and I need a way to separate them into 3 source types using transforms (or something like that). However, as the data is event data, how do I do that? For example, in the past when I had to create a new source type I could use something like this:

[AMBER_RAW]
SEDCMD-remove_header = s/^.*?\{/{/1
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
TRANSFORMS-sourcetype_routing = AMBER_RAW_json_EVENT,AMBER_RAW_json_TRACE,AMBER_RAW_json_METRIC
EXTRACT-CLUSTER_MACHINE_TEST = ^(?:[^\[\n]*\[){2}(?P<CLUSTER_MACHINE_TEST>[^/]+)

This shows the three different source types that are possible. So I need to create 3 different source types from the original one.
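A minimal sketch of what the matching transforms.conf stanzas could look like, assuming each JSON event carries a distinguishing type value that a regex can match on (the regexes and the "type" key below are hypothetical placeholders, not taken from the post):

# transforms.conf -- index-time sourcetype routing; each stanza runs a regex
# against the raw event text and rewrites the sourcetype on a match
[AMBER_RAW_json_EVENT]
REGEX = "type"\s*:\s*"EVENT"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::amber_json_event

[AMBER_RAW_json_TRACE]
REGEX = "type"\s*:\s*"TRACE"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::amber_json_trace

[AMBER_RAW_json_METRIC]
REGEX = "type"\s*:\s*"METRIC"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::amber_json_metric

Note that these run at index time against the raw data, which is why the raw/event distinction in the question matters.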
Hi, I am new to OTel, and I am struggling with a use case that I could really use some advice on, please. I have a test case where we need to send 3.3K logs per second over HEC to Splunk. The data is currently being sent into one index and one source type (exporter below). The data is really 3 different sourcetypes, defined by the event field "logs.type". The issue is the speed in searching. The SPL I am using is below, but it is not fast for high volumes:

index="murex_logs"
| regex mx.env="dell967srv:15017" ```We have multiple environments sending in the data```
| regex log.type="http" ```http is one of the 3 types that the data source could be```

To me this will be very slow: first, running regex is probably slow, and second, it has to sort http out from the other data (3 types). I am talking to dev to see if they can send the data on three different exporters (below). However, I would still have to run regex mx.env="dell967srv:15017" to find the environment that I need.

exporters:
  splunk_hec/logs_1: # pushed to splunk
    token: "a04daf32-68b9-48b2-88a0-6ac53b3ec002"
    endpoint: https://mx33456vm:8088/services/collector
    source: "mx"
    sourcetype: "otel"
    index: "murex_logs"
    tls:
      insecure_skip_verify: true

Some of the possible answers I am looking into are:
- Use transforms to create 3 different sourcetypes, if possible. But can I do this on event data (regex mx.env="dell967srv:15017") [purple below]? I know I can do it on raw data, but I am not sure about event data (green below).
- Ask the dev team to send the data by log.type and not put all the data into one index (but I would still have to use regex for the host).

This is my log data. The green data is the raw data; the purple is the event data. Any help would be amazing. Thanks in advance!
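A minimal sketch of a faster search, assuming mx.env and log.type are available as fields (HEC event-endpoint fields are indexed, so direct field filters in the base search are far cheaper than piping every event through | regex):

index="murex_logs" "mx.env"="dell967srv:15017" "log.type"="http"
| stats count by host

Filtering in the base search lets the indexers discard non-matching events before anything reaches the search head, whereas | regex pulls all 3 types back first and filters afterwards.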
Will there be a problem with compatibility if the deployment server version is different from the Splunk UF or HF version? For example: the deployment server is version 7 and the Splunk UF or HF is version 8. Please provide Splunk documentation.
Root Cause(s): The percentage of non-high-priority searches skipped (100%) over the last 24 hours is very high and exceeded the red threshold (20%) on this Splunk instance. Total searches that were part of this percentage = 1. Total skipped searches = 1.
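For anyone hitting the same health-check message, a starting-point sketch for seeing which scheduled search is being skipped and why, using the scheduler's own logs in _internal:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason

With only one search counted in the percentage, a single skipped run produces the 100% figure, so the reason field usually tells the real story.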
Hi experts, could you please advise me about SPL? Given the data below, I would like to rewrite the id of rows with type value 2 to the id value of the row with type value 1, for rows with the same axid value.

----------------------
axid, id, type
----------------------
0001,abc,1
0001,def,2
0001,ghi,2
0002,jkl,1
0002,mno,2
0002,pqr,2

Expected results follow:

----------------------
axid, id, type
----------------------
0001,abc,1
0001,abc,2
0001,abc,2
0002,jkl,1
0002,jkl,2
0002,jkl,2

Thanks in advance!!
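A minimal sketch of one way to do this with eventstats, assuming there is exactly one type=1 row per axid (field names taken from the post; the makeresults block just reconstructs the sample data for testing on 8.2+):

| makeresults format=csv data="axid,id,type
0001,abc,1
0001,def,2
0001,ghi,2
0002,jkl,1
0002,mno,2
0002,pqr,2"
| eventstats values(eval(if(tonumber(type)==1, id, null()))) as id_type1 by axid
| eval id=if(tonumber(type)==2, id_type1, id)
| fields - id_type1
| table axid, id, type

eventstats copies the type=1 id onto every row of the same axid, and the eval then overwrites id only on the type=2 rows.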
I am trying to set up a federated index on a federated search head, but I am only able to select an index as the remote dataset; the dropdown for dataset type does not offer any other option. How do I have to configure the dataset on the remote search head in order to be able to use it on the federated search head? Both systems are clustered search heads running Splunk Enterprise 8.2.2.
SOAR version 5.1.0.70187, on-prem installation. Can you please advise how I can install a Python 2 app from the source code? The Python 2 app in question is GitHub - splunk-soar-connectors/talosintelligence.
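One commonly used approach for on-prem SOAR is to copy the app source onto the box and compile/install it with the bundled tooling; a rough sketch, assuming a default /opt/phantom install path (verify the tool path on your instance):

# as the phantom user on the SOAR server
git clone https://github.com/splunk-soar-connectors/talosintelligence.git
cd talosintelligence
phenv python /opt/phantom/bin/compile_app.pyc -i

Alternatively, tar the app directory into a .tgz and install it through the Apps page in the UI. Note that SOAR 5.x runs apps under Python 3, so a Python 2 only connector may need porting before it will actually run.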
In a classic dashboard, we can use Simple XML to create multiple tabs. In Dashboard Studio, how do we create multiple tabs? Please help. Splunk version: 8.2.5.
Upgrade Readiness App 3.1.0 reports that Splunk App for Lookup File Editing 3.6.0 is not compatible with Python 3. It seems that the recently updated Lookup File Editing 3.6.0 information is missing at line 29 in $SPLUNK_HOME/etc/apps/python_upgrade_readiness_app/bin/libs_py3/pura_libs_utils/splunkbaseapps.csv. Will the Upgrade Readiness App maintainers update splunkbaseapps.csv and release a new version? Or can I simply put in the entry myself and skip the warning?

;3.6.0#8.1|8.2|
Our customer has 2 Windows Storage Server 2016 Standard machines that perform data storage and backup for Splunk servers, on which we recently identified an SMB-related vulnerability because the "Microsoft network client: Digitally sign communications" policy is disabled. My client would like to enable this policy to ensure that packet signing is done for SMB and hence mitigate this VA finding. However, I have a question: will this create any issue with the file sharing between the Splunk servers and these storage servers once the change has been made?
I have created a dashboard that allows you to enter a user and their information and then write all of it to a lookup table. I need help adjusting the search queries so that when you select Add it writes the user to the lookup table, and when you select Remove it removes any instance where the user's name is found in the lookup table. Here is my XML so far:

<panel depends="$add$">
  <title>Add User</title>
  <table>
    <search>
      <query>| inputlookup usb.csv
| append [ | makeresults
  | eval user="$user_tok$", email="$email_tok$", description="$description_tok$", revisit="$revisit_tok$", Action="$dropdown_tok$"
  | fields - _time ]
| table user, email, description, revisit
| outputlookup usb.csv</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </table>
</panel>
<panel depends="$remove$">
  <title>Remove User</title>
  <table>
    <search>
      <query>| inputlookup usb.csv
| where user != ""
| table user, email, description, revisit
| outputlookup usb.csv</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </table>
</panel>
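A minimal sketch of a Remove User query that actually filters on the selected user, assuming $user_tok$ holds the name to remove (the current query's where user != "" keeps every row, so nothing is ever removed):

| inputlookup usb.csv
| where user != "$user_tok$"
| table user, email, description, revisit
| outputlookup usb.csv

outputlookup rewrites the whole file, so everything except the matching rows is written back.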
| makeresults
| eval value=1
| stats count as "count: 1", count by value

If you try to use a chart overlay with "count: 1" using the above query, nothing happens; the overlay is not applied. If you try with "count", it works as expected. Both have worked in the past. Is this fixed in a future version? I didn't see it on the known issues page. Thanks.
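A possible workaround sketch while waiting on a fix, assuming the colon/space in the field name is what breaks the overlay matching: rename the series to a plain identifier just for charting (the name count_1 is hypothetical):

| makeresults
| eval value=1
| stats count as "count: 1", count by value
| rename "count: 1" as count_1

The overlay can then be bound to count_1 instead.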
I want to get an alert and run it, but there are items I want to remove first.

| rest "/servicesNS/-/-/saved/searches"
| search title="SomeAlert"
| fields qualifiedSearch

So far I am able to get my search, but there is a line in there I want to remove before displaying the result. For example, suppose the following was a line in qualifiedSearch:

| rename test1 as test, rename operation1 as operation

Is there an easy way I can use rex or something else to find this string in qualifiedSearch and remove it?
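A minimal sketch using eval's replace() to strip that one line, assuming it appears verbatim (regex special characters such as | must be escaped, so adjust the pattern to your actual string):

| rest "/servicesNS/-/-/saved/searches"
| search title="SomeAlert"
| fields qualifiedSearch
| eval qualifiedSearch=replace(qualifiedSearch, "\|\s*rename test1 as test,\s*rename operation1 as operation\s*", "")

rex mode=sed field=qualifiedSearch with an s/.../.../ expression would work the same way if you prefer rex.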
I have a dropdown in a dashboard that uses a lookup table with columns X and Y. The values in X are unique; the values in Y are not. I am using X in "Field for Label" and Y in "Field for Value". The problem is that when I do not dedup Y, I get an error below the dropdown that says "Duplicate values causing conflict". The search string I'm using is this:

| inputlookup LT | fields X, Y

When I change it to this:

| inputlookup LT | fields X, Y | dedup Y sortby X

...the error disappears. What I would like is to retain all the values of X instead of removing some with the dedup operation. Is this possible?
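One possible workaround sketch: since dropdown values must be unique, use the unique X column for both label and value, and resolve Y at search time instead (the token name $x_tok$ and the index=main base search are hypothetical stand-ins):

| inputlookup LT | fields X

Then, in the consuming search, map the selected X back to its Y with a subsearch:

index=main [ | inputlookup LT | where X="$x_tok$" | return Y ]

return Y emits Y="..." into the outer search, so every X stays selectable even when several X values share one Y.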
Hello everyone. I wonder if anyone could help me with a report I'm trying to make. Below is my sample log format.

log1 example:
ipfield sessionfield - - timefield urlfield methodfield

log2 example:
datefield midfield sessionfield2 sessionfield3 userfield functionfield ipfield2 rolefield

What I want to do is search log2, and if the sessionfield from log1 exists there, print out a table that has: userfield from log2, ipfield from log1 or log2, all sessionfields from log1 and log2, urlfield and methodfield, and the counts of methodfield. I have something like this:

(index=1 log2) OR (index=1 log1)
| eval sessionfield=coalesce(sessionfield, sessionfield2, sessionfield3)
| stats values(sessionfield) values(ipfield2) by sessiontuser

I got the sessionfield(s) to print, but it did not print the sessionfield in log1, and I could not figure out how to print the other fields that I needed. I don't have much experience in Splunk search, so any guidance or help would be excellent. Thank you.
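A minimal sketch of one way to correlate the two log types, using the field names from the post (the sourcetype=log1/log2 filters are placeholders for however the two formats are actually distinguished):

(index=1 sourcetype=log1) OR (index=1 sourcetype=log2)
| eval session=coalesce(sessionfield, sessionfield2, sessionfield3)
| stats values(userfield) as user, values(ipfield) as ip_log1, values(ipfield2) as ip_log2,
        values(urlfield) as url, count(methodfield) as method_count by session
| where isnotnull(user) AND isnotnull(url)

The final where keeps only sessions that appear in both logs (log2 supplies user, log1 supplies url). One likely cause of the missing log1 rows in the original attempt: stats silently drops any event that lacks the by field, so grouping by a log2-only field (sessiontuser) discards every log1 event, whereas grouping by the coalesced session keeps both.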
Hi, is there any way we could find the daily average of volume of data (not from a transaction summary standpoint, but from a daily size standpoint)?
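A minimal sketch using the license usage log, assuming it runs on (or against) the license manager and that licensed ingest volume is an acceptable proxy for daily data size:

index=_internal source=*license_usage.log type="Usage"
| timechart span=1d sum(b) as bytes
| stats avg(bytes) as avg_daily_bytes
| eval avg_daily_GB=round(avg_daily_bytes/1024/1024/1024, 2)

The b field is the per-slice byte count that license metering records, so summing it by day gives daily ingested volume.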
In my Splunk logs, I have 2 IPs in 1 field. I want to extract both IPs and create new fields IP1 and IP2. Please help here.

The user XYZ was involved in an impossible travel incident. The user connected from two countries within 280 minutes, from these IP addresses: United States (205.000.000.0) and Italy (37.000.000.00). If any of these IP addresses are used by the organization for VPN connections and do not necessarily represent a physical location, we recommend categorizing them as VPN in the IP Address range page in Microsoft Defender for Cloud Apps portal to avoid false alerts.

Example:
IP1 - 205.000.000.0
IP2 - 37.000.000.00
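A minimal sketch of a rex that captures both addresses from that message format, assuming the text "from these IP addresses:" and the parenthesized IPs are consistent across events (swap field=_raw for the actual field name if the message lives in a specific field):

| rex field=_raw "from these IP addresses: [^(]+\((?<IP1>[\d.]+)\) and [^(]+\((?<IP2>[\d.]+)\)"

This yields IP1=205.000.000.0 and IP2=37.000.000.00 for the sample event.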
This search will display port numbers from the Endpoint data model:

| tstats summariesonly=true count from datamodel=Endpoint.Ports by Ports.dest_port

I would like to create a search that will show other fields, like dest_bunit, alongside the port. Without the data model I could just do a stats count by dest_port; I'm not sure how to replicate that query using the data model.
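A minimal sketch of the grouped version, assuming the CIM Endpoint data model's Ports dataset and that dest_bunit is populated by your asset enrichment:

| tstats summariesonly=true count from datamodel=Endpoint.Ports by Ports.dest_port, Ports.dest_bunit
| rename Ports.* as *

Any additional field just needs the dataset prefix (Ports.) in the by clause, which the rename then strips for readability.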
How can I pull 3 tokens from a single dropdown search? I would like our users to select the case_idz and have the _time values populate from the same dropdown. (I know I can append this to the individual searches with the case_idz token, but that seems very brute-force and inelegant.) Here is the populating search:

| tstats count WHERE index=cases BY source, _time
| fields source, _time
| rex field=source max_match=0 "^[A-Z]:\\\\([^\\\\]*)\\\\([^\\\\]*)\\\\(?P<case_idz>[^\\\\]*)"
| stats count by case_idz, _time
| fields case_idz, _time
| stats earliest(_time) AS earliest_event, latest(_time) AS latest_event by case_idz
| convert ctime(earliest_event) ctime(latest_event)

This gives a table of: case_idz, earliest_event, latest_event. I would like to turn each of these into a token: $case_idz$, $earliest_event$, $latest_event$. The case_idz is the value that users need to pivot off of, and earliest_event and latest_event are the second and third tokens that I would like to use to set the earliest and latest time values for the searches. Other than taking components of this search and adding them to each and every dashboard, how can I have the three variables trigger in one pass?
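One possible sketch: a dropdown only carries its label and value, so pack all three columns into the value with a delimiter and split them apart in a <change> handler (the pipe delimiter and token names here are hypothetical). In the populating search, append:

| eval combo=case_idz . "|" . earliest_event . "|" . latest_event

Then in the input definition:

<input type="dropdown" token="case_combo">
  <fieldForLabel>case_idz</fieldForLabel>
  <fieldForValue>combo</fieldForValue>
  <change>
    <eval token="case_idz_tok">mvindex(split($value$, "|"), 0)</eval>
    <eval token="earliest_event">mvindex(split($value$, "|"), 1)</eval>
    <eval token="latest_event">mvindex(split($value$, "|"), 2)</eval>
  </change>
</input>

Panel searches can then use $case_idz_tok$ in the query and $earliest_event$/$latest_event$ as earliest/latest bounds. Note that earliest/latest expect epoch values, so the convert ctime step should be skipped (or done separately for display only).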
I have a field properties.policies in JSON format.

Field value: [{"fieldname":"fieldvalue","fieldname":"fieldvalue","fieldname":"fieldvalue",[priview] "fieldname":"fieldvalue",[]}]

I want to remove the first and last [ ] so that the other fields can populate. Can someone send me the rex, please? Thanks in advance.
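A minimal sketch using rex in sed mode to strip the outer brackets, assuming the field value always starts with [ and ends with ] (run spath afterwards to pull out the inner keys):

| rex mode=sed field=properties.policies "s/^\[(.*)\]$/\1/"
| spath input=properties.policies

The single substitution captures everything between the outer brackets and writes it back without them.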