All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We are receiving data via a data diode, but the event logs are from multiple hosts. How can we parse the data per host and direct it to the indexers?
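A minimal sketch of one common approach, assuming the data lands on an indexer or heavy forwarder and the originating hostname appears somewhere in each raw event (the sourcetype name and the regex below are placeholders, not from the post):

  # props.conf
  [diode_syslog]
  TRANSFORMS-set_host = set_host_from_event

  # transforms.conf - adjust REGEX to wherever the real hostname appears in the event
  [set_host_from_event]
  REGEX = host=(\S+)
  FORMAT = host::$1
  DEST_KEY = MetaData:Host

With the host metadata rewritten at parse time, each event carries its true host value when it is indexed. Note this only works where parsing happens (indexer or heavy forwarder), not on a universal forwarder.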
Hi guys, I'm a Splunk noob here and I'm going nuts. I know this is an extremely simple search and I can't get it right. I'm trying to create a search for remote access applications based on our firewall index. The IP CIDR ranges are pulled from a lookup file (network_assets.csv) and matched against the source IP from my events. There are fields in the lookup file that do not exist in the events; I'm particularly interested in adding the field called usertags (which is included in the lookup). I am using this link as a reference and I can't get it to work: https://community.splunk.com/t5/Splunk-Search/How-do-I-append-columns-to-a-search-via-inputlookup-where-the/m-p/402136

index=fw | search appcat=Remote.Access | search app!="RDP" AND app!="WMI.DCERPC" | lookup network_assets.csv cidr | eval cidr=src | search usertags="*server*" | table src dest app url appcat usertags

My search currently does not give me any results. Any help would be much appreciated.
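A sketch of one possible shape for this search, assuming a lookup definition named network_assets has been created over network_assets.csv with match_type = CIDR(cidr) in transforms.conf (the definition name is an assumption, and this is not presented as the poster's confirmed fix):

  index=fw appcat=Remote.Access app!="RDP" app!="WMI.DCERPC"
  | lookup network_assets cidr AS src OUTPUT usertags
  | search usertags="*server*"
  | table src dest app url appcat usertags

The key differences from the original are that the event field (src) is mapped onto the lookup's cidr field inside the lookup command rather than with an eval afterwards, and that CIDR matching only happens when the lookup definition declares a CIDR match_type; calling the bare .csv file performs exact string matching.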
Hi Friends, I am trying to list all the available Splunk lookups and display the count of records present in each lookup. I found the rest command to list all the lookups, but how do I get the record count for each lookup?

REST command: | rest /servicesNS/-/-/data/lookup-table-files | table title

I want to display the count of records present in each lookup in another column. Is it possible with SPL? Requesting your valuable feedback and help. Thank you in advance. Himanshu
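One hedged way to approach it is to feed each lookup name into a sub-search with the map command; this is only a sketch, and map is slow and capped by maxsearches, so it suits a modest number of lookups:

  | rest /servicesNS/-/-/data/lookup-table-files
  | fields title
  | map maxsearches=500 search="| inputlookup $title$ | stats count AS record_count | eval lookup_file=\"$title$\""
  | table lookup_file record_count

Lookup names containing spaces or special characters may need extra quoting inside the map search string.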
Hi All, I am new to the UF on Windows and here is the deployment in my lab:
1 Splunk Enterprise instance running on CentOS 8
1 UF running on Windows pointing to the instance above
For now, I am able to retrieve the events in the search bar with host="DESKTOP-JQJVH8A" source="WinEventLog:Security". What I am confused about is the configuration files:
outputs.conf: D:\SplunkUniversalForwarder\etc\system\local
inputs.conf: D:\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local
Why is inputs.conf not in the same directory as outputs.conf? Is this owing to the installation? Say I would like to add some more stanzas to inputs.conf, do I need to create a new inputs.conf in etc\system\local or modify the existing one in etc\apps\SplunkUniversalForwarder\local? Thanks.
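Both locations are merged by Splunk's configuration precedence (etc\system\local wins over an app's local directory), so either file works. A minimal sketch of an extra stanza added in etc\system\local\inputs.conf; the monitored path, index, and sourcetype are placeholders rather than values from the post:

  # D:\SplunkUniversalForwarder\etc\system\local\inputs.conf
  [monitor://C:\MyApp\logs\app.log]
  disabled = 0
  index = main
  sourcetype = myapp:log

After editing, restart the forwarder (bin\splunk restart) so the new stanza takes effect.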
Hi, I have a Splunk Cloud trial instance. I am using a Spring Boot application to make a simple HTTP POST call to the HEC in batch mode. The format of the event is JSON and I am not adding line breaks between two events. Splunk is receiving the requests and adding them as events. However, each of my events is getting truncated and is not showing up as well-formed JSON. I can see that the entire event is not being added, and when I measured the size of each event, it was coming to about 10 kB. I then found this: https://docs.splunk.com/Documentation/Splunk/latest/Data/Configureeventlinebreaking Specifically, I think I'm being impacted by this: The Splunk platform uses the LINE_BREAKER and TRUNCATE settings to evaluate and break events over 10kB into multiple lines of 10kB each.
Questions:
1. Is there no way to send events to Splunk Cloud larger than 10 kB?
2. If it is indeed supported, what configuration do we need that can be performed via Splunk Web, since we don't have access to config files etc. in Splunk Cloud? Is it something related to Source Types (Advanced config)?
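Larger events are possible; the per-sourcetype TRUNCATE limit (default 10,000 bytes) is what cuts them off. A sketch of the props.conf equivalent, which in Splunk Cloud can usually be set from Splunk Web under Settings > Source types > (your sourcetype) > Advanced; the sourcetype name below is a placeholder:

  [my_hec_json]
  # raise the truncation limit; 0 disables truncation entirely
  TRUNCATE = 100000

Make sure the HEC token or the Spring Boot client actually assigns that sourcetype, otherwise the default sourcetype's limits still apply.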
My company was acquired and we just migrated email domains, but I need to update all users' email addresses so they can use Google auth to sign in. I can't modify email addresses as an admin. Any simple solution?
I am trying to install the Cluster Agent with the Kubernetes CLI and I am getting the error below:

#kubectl create -f cluster-agent.yaml
error: error validating "cluster-agent.yaml": error validating data: [ValidationError(Clusteragent.spec): unknown field "account" in com.appdynamics.v1alpha1.Clusteragent.spec, ValidationError(Clusteragent.spec): unknown field "appName" in com.appdynamics.v1alpha1.Clusteragent.spec, ValidationError(Clusteragent.spec): unknown field "controllerUrl" in com.appdynamics.v1alpha1.Clusteragent.spec, ValidationError(Clusteragent.spec): unknown field "serviceAccountName" in com.appdynamics.v1alpha1.Clusteragent.spec]; if you choose to ignore these errors, turn validation off with --validate=false

When I change the apiVersion to appdynamics.com/v1:

kubectl create -f cluster-agent.yaml
error: resource mapping not found for name: "k8s-cluster-agent" namespace: "appdynamics" from "cluster-agent.yaml": no matches for kind "Clusteragent" in version "appdynamics.com/v1" ensure CRDs are installed first

Please help me to resolve this.

kubectl version
Client Version: v1.24.0
Kustomize Version: v4.5.4
Server Version: v1.22.6
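Both errors point at the Clusteragent CRD on the cluster not matching the manifest. A hedged sketch of how to check and reinstall, assuming the operator bundle file is named cluster-agent-operator.yaml as in the AppDynamics install docs (treat the exact file and CRD names as assumptions):

  # see whether the CRD exists and which API versions it serves
  kubectl get crd clusteragents.appdynamics.com -o jsonpath='{.spec.versions[*].name}'

  # if it is missing or only serves an older version, (re)install the operator
  # bundle that ships the CRDs, then create the cluster agent resource
  kubectl create namespace appdynamics
  kubectl create -f cluster-agent-operator.yaml -n appdynamics
  kubectl create -f cluster-agent.yaml -n appdynamics

Since the server is v1.22, an operator bundle written for the removed apiextensions.k8s.io/v1beta1 CRD API will not install at all, so a current bundle whose CRD matches the fields in cluster-agent.yaml is needed.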
Hi All, I am trying to build the parsing stanza for one of my data sources. While testing I get a pop-up message stating "could not use the strptime to parse timestamp from "2022-26-05T11:29:57"". As soon as I apply the TIME_FORMAT setting, Splunk throws the message. I am not sure what I am missing here, so could you please help me resolve this issue?

Event details:
<Event CompactMode="1" sEventType="OpResult" dwBasicEventType="9" dwAppSpecificEventID="5000" sEventID="EVENT_ID_SCHEDULER_STARTED" sOriginatingApplicationName="RED Identity Management Console" sOriginatingApplicationComponent="Scheduler" sOriginatingApplicationVersion="5.5.3.0" sOriginatingSystem="XXXXXXXXXXXXX" sOriginatingAccount="XXXX\XXXXX" dtPostTime="2022-26-05T11:29:57" sMessage="RED Identity Management Console (running as user XXXX\XXXXX) on system XXXXXXXXXXXXX; - background processor started"/>

Props stanza:
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\<Event
NO_BINARY_CHECK=true
TIME_PREFIX=dtPostTime\=\"
TIME_FORMAT=%Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD=20

Event details:
[5/26/2022 4:09:55 PM UTC] Note: Unknown provider type; cannot verify object name 'tbl_BaseJobInfo' valid for data store.

Props.conf:
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\[\d+\/\d{2}\/\d{4}\s\d+\:\d{2}\:\d{2}\s[^\]]+\]
NO_BINARY_CHECK=true
disabled=false
TIME_PREFIX=^\[
TIME_FORMAT=%m-%d-%Y %I:%M:%S %p %s
MAX_TIMESTAMP_LOOKAHEAD=25
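For reference, the sample timestamps do not match the declared formats, so here is a sketch of formats that would match the two samples shown, assuming "26" in 2022-26-05 really is the day of month:

  # first sourcetype: dtPostTime="2022-26-05T11:29:57" is year-day-month
  TIME_PREFIX = dtPostTime="
  TIME_FORMAT = %Y-%d-%mT%H:%M:%S
  MAX_TIMESTAMP_LOOKAHEAD = 20

  # second sourcetype: [5/26/2022 4:09:55 PM UTC] uses slashes and a zone name
  TIME_PREFIX = ^\[
  TIME_FORMAT = %m/%d/%Y %I:%M:%S %p %Z
  MAX_TIMESTAMP_LOOKAHEAD = 30

The original second stanza uses dashes and a trailing %s that the slash-separated sample does not contain, which would also explain strptime failures; these replacements are suggestions to test, not confirmed fixes.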
Hello, I'm having a problem with Dashboard Studio in Splunk Enterprise (version 8.2.5). I would like to create a visualization with a drilldown that lets the user click on a given data point (for example a bar in a bar chart) or a given record of a table and open a new dashboard that contains more detailed visualizations. As far as I know this is possibile by adding a Link to custom URL drilldown and providing the URL of the dashboard in the configuration. Now I would like to drill down to a "details" dashboard but setting the value of a token, so that the visualizations in this "details" dashboard are already filtered by a value in a given field, e.g. /app/search/my_destination_dashboard?form.my_field=$passed_token|u$ However the value that should be assigned to the token is dynamic, i.e. it should depends on the particular data point that the user clicked. For example, If I click on a particular record of a table the value of the token (passed_token) should be set to clicked cell value. It seems this is possibile with Dashboard Studio in Splunk Cloud Platform (see here). But I was not able to reproduce it in Splunk Enterprise. Here it is an example. The following is the JSON definition of the "source" dashboard (made with Studio) where the table visualization has a Link to custom URL drilldown:   { "visualizations": { "viz_aWKTkUpc": { "type": "splunk.table", "dataSources": { "primary": "ds_e3l7tAe8" }, "title": "Number of events per item", "eventHandlers": [ { "type": "drilldown.customUrl", "options": { "url": "/app/search/test__target_1?form.item_name=$item_name|u$", "newTab": true } } ], "description": "Selected item: $item_name$" } }, "dataSources": { "ds_vW29Fvqp": { "type": "ds.search", "options": { "query": "| makeresults count=100 \n| eval _items=\"banana,apple,grapefruit,lemon,orange\" \n| makemv delim=\",\" _items \n| eval _a=10 \n| eval _rand_i = random() % _a \n| eval _n=mvcount(_items) \n| eval _j = _rand_i % _n \n| eval item = mvindex(_items, _j) " }, "name": "base" }, "ds_e3l7tAe8": { "type": "ds.chain", "options": { "extend": "ds_vW29Fvqp", "query": "| stats count by item" }, "name": "table" }, "ds_1FI28nVT": { "type": "ds.chain", "options": { "query": "| stats count by item \n| table item", "extend": "ds_vW29Fvqp" }, "name": "item_list" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "0," }, "title": "Global Time Range" }, "input_Sc6kQbF9": { "options": { "items": [ { "label": "All", "value": "*" } ], "defaultValue": "*", "token": "item_name" }, "title": "Select item", "type": "input.dropdown", "dataSources": { "primary": "ds_1FI28nVT" }, "encoding": { "label": "primary[0]", "value": "primary[0]" } } }, "layout": { "type": "grid", "options": {}, "structure": [ { "item": "viz_aWKTkUpc", "type": "block", "position": { "x": 0, "y": 0, "w": 1200, "h": 282 } } ], "globalInputs": [ "input_global_trp", "input_Sc6kQbF9" ] }, "description": "", "title": "Test - source" }   The following is the XML definition of the "target" dashboard (where I want to land):   <dashboard> <label>Test - target 1</label> <search id="base"> <query> | makeresults count=100 | eval _items="banana,apple,grapefruit,lemon,orange" | makemv delim="," _items | eval _a=10 | eval _rand_i = random() % _a | eval _n=mvcount(_items) | eval _j = _rand_i % _n | eval item = mvindex(_items, 
_j) </query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> </search> <search id="sel_search" base="base"> <query> | search item=$form.item_name|s$ </query> </search> <fieldset submitButton="false" autoRun="true"> <input type="time" token="time" searchWhenChanged="true"> <label>Time</label> <default> <earliest>0</earliest> <latest></latest> </default> </input> <input type="dropdown" token="item_name" searchWhenChanged="true"> <label>Select item</label> <search base="base"> <query> | stats count by item | table item </query> </search> <fieldForLabel>item</fieldForLabel> <fieldForValue>item</fieldForValue> <initialValue>*</initialValue> <default>*</default> <choice value="*">All</choice> </input> </fieldset> <row> <panel id="selected_item"> <html> <style> #selected_item { text-align: left; } </style> <p>Selected item name: <b>$form.item_name$</b> </p> </html> </panel> </row> <row> <panel> <title>Events</title> <table> <search id="table_events" base="sel_search"> <query> | table _time, item </query> </search> <option name="drilldown">none</option> </table> </panel> </row> </dashboard>   The drilldown works but you must first set the token value by using the dropdown input and then you can click on the table. II tried to modify the JSON definitio of the "source" dashboard" as shown in the example in the doc for the Splunk Cloud Dashboard Studio:   { "visualizations": { "viz_aWKTkUpc": { "type": "splunk.table", "dataSources": { "primary": "ds_e3l7tAe8" }, "title": "Number of events per item", "eventHandlers": [ { "type": "drilldown.setToken", "options": { "tokens": [ { "token": "item_name", "key": "row.item.value" } ] } } ], "description": "Selected item: $item_name$" } }, "dataSources": { "ds_vW29Fvqp": { "type": "ds.search", "options": { "query": "| makeresults count=100 \n| eval _items=\"banana,apple,grapefruit,lemon,orange\" \n| makemv delim=\",\" _items \n| eval _a=10 \n| eval _rand_i = random() % _a \n| eval _n=mvcount(_items) \n| eval _j = _rand_i % _n \n| eval item = mvindex(_items, _j) " }, "name": "base" }, "ds_e3l7tAe8": { "type": "ds.chain", "options": { "extend": "ds_vW29Fvqp", "query": "| stats count by item" }, "name": "table" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } }, "tokens": { "default": { "item_name": { "value": "*" } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "0," }, "title": "Global Time Range" } }, "layout": { "type": "grid", "options": {}, "structure": [ { "item": "viz_aWKTkUpc", "type": "block", "position": { "x": 0, "y": 0, "w": 1200, "h": 300 } } ], "globalInputs": [ "input_global_trp" ] }, "description": "", "title": "Test - source - mod" }   i.e. removing the dropdown input, changing the eventHandlers type property to drilldown.setToken and adding a default value to the token item_name in the defaults section. Whenever I click on a row of the table, the token item_name should be assigned the value of the cell under the "item" column and the table description "Selected item name: ..." should be updated. But it seems not to work in Splunk Enterprise Dashboard Studio. However, even if it worked, the Set Token drilldown would only set the token value for the current dashboard and not redirect to an external URL. 
I need the drilldown to do both: set the token value and then open a URL where I pass the token value as a query parameter, to land on a "filtered" dashboard. Does anyone know if this type of drilldown is possible with Dashboard Studio in Splunk Enterprise and how to do it?
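For reference, newer Dashboard Studio releases let a drilldown.customUrl URL reference the clicked table cell directly through row tokens, which avoids the separate setToken step. A sketch of what that event handler might look like; whether the 8.2.5 Studio build supports row tokens in custom URLs is exactly the open question here, so treat this as an assumption to verify:

  "eventHandlers": [
    {
      "type": "drilldown.customUrl",
      "options": {
        "url": "/app/search/test__target_1?form.item_name=$row.item.value|u$",
        "newTab": true
      }
    }
  ]

If the clicked value is substituted, the target dashboard's form.item_name input is pre-filled exactly as in the static-token URL already shown above.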
Dear Sir or Madam, could you please advise me about transferring logs to the Splunk server when there is no open port for listening? The only open port is 80, which is reverse proxied to 8000 through the Apache configuration for the Splunk web UI, as shown below:

<VirtualHost *:80>
ProxyPass         /  http://localhost:8000/
ProxyPassReverse  /  http://localhost:8000/
</VirtualHost>

I will be grateful if you can advise me on the best solution for transferring logs without opening an additional port. I really appreciate your help and support. Kind Regards, Farid
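One hedged option is to expose the HTTP Event Collector through the same Apache virtual host, so senders post events over the already-open port 80. A sketch, assuming HEC is enabled on its default port 8088 under Settings > Data inputs > HTTP Event Collector (whether plain HTTP is acceptable is a policy question for your environment):

  <VirtualHost *:80>
      # HEC first, so the more specific path wins over the catch-all
      ProxyPass         /services/collector  http://localhost:8088/services/collector
      ProxyPassReverse  /services/collector  http://localhost:8088/services/collector
      # existing Splunk Web proxy
      ProxyPass         /  http://localhost:8000/
      ProxyPassReverse  /  http://localhost:8000/
  </VirtualHost>

Clients would then send events to http://<server>/services/collector with an HEC token. Note that native forwarder-to-indexer traffic (port 9997) uses Splunk's own protocol and cannot be funneled through an HTTP reverse proxy this way.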
I found that for the query below, the search happens based on the default time field, which is _time. Whenever I choose a date and time based on the default time (e.g. '5/26/22 7:40:00.000 AM'), the events are populated, but if I select a date and time that aligns with my custom time field 'originaltime', I do not get any events. Am I doing anything wrong here?

index="summary_carrier_service" originalsource="*gps-request-processor-dev*" originalsourcetype= "*eu-central-1*" event="*Request" | fields event category labelType documentType regenerate businessKey businessValue sourceNodeType sourceNodeCode geoCode jobId status sourcetype source originaltime | addinfo | eval ts=strptime(originaltime,"%Y-%m-%d %H:%M:%S") | where (ts>info_min_time and ts<=info_max_time)
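The time range picker always restricts the base search by _time, so events whose _time falls outside the picked window never reach the where clause, no matter what originaltime says. A sketch of one workaround that searches a wide _time window and filters on originaltime with explicit bounds; the hard-coded dates are purely illustrative, and searching all time can be expensive:

  index="summary_carrier_service" originalsource="*gps-request-processor-dev*" originalsourcetype="*eu-central-1*" event="*Request" earliest=0
  | eval ts=strptime(originaltime,"%Y-%m-%d %H:%M:%S")
  | where ts>=strptime("2022-05-26 00:00:00","%Y-%m-%d %H:%M:%S") AND ts<strptime("2022-05-27 00:00:00","%Y-%m-%d %H:%M:%S")
  | fields event category labelType documentType regenerate businessKey businessValue sourceNodeType sourceNodeCode geoCode jobId status sourcetype source originaltime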
We are going to integrate WAF logs from AWS SQS. What is the best way to do it?
Hi, I am using dbxquery to fetch the DB data. The DB data is huge, hence I am using maxrows=56406002, but the query keeps loading for 30-40 minutes and later throws the error below, even though I am fetching only one year of data:

'Search auto-canceled'
'The search job has failed due to an error'

| dbxquery connection=XXX query="SELECT DATE, ENDDATE, BEGDA, ENDDA FROM PA2001 where BEGDA>=20160101 AND BEGDA<=20161231" maxrows=56406002 | streamstats count as SL_NO |table DATE ENDDATE BEGDA ENDDA SL_NO
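Pulling tens of millions of rows through a single ad-hoc dbxquery usually hits the search job's runtime and auto-cancel limits before it finishes. One hedged workaround is to page the query into smaller windows (a month at a time below, purely to illustrate the pattern) or to move the load into a scheduled DB Connect input instead of dbxquery:

  | dbxquery connection=XXX maxrows=10000000 query="SELECT DATE, ENDDATE, BEGDA, ENDDA FROM PA2001 WHERE BEGDA>=20160101 AND BEGDA<20160201"
  | streamstats count AS SL_NO
  | table DATE ENDDATE BEGDA ENDDA SL_NO

Each smaller window finishes well within the job limits and the results can be appended or loaded into a lookup/summary index as they arrive.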
Hi All, I have set up a universal forwarder on a Windows machine to monitor a static file which is in JSON format. The logs are being forwarded, but they are forwarded as a single event, like below:

{"Env": "someenv12”, "Name": "test12”, "feature": "TestFeature12”, "logLevel": "info", "Id": "1234", "date": 1652187242.57, "productName": “testproduct”, "process_name": “test process, "pid": 695, "process_status": "sleeping", "process_cpu_usage": 0.0, "process_ram_usage": 0.0, "metric_type": "system_process"}
{"Env": "someenv1”3, "Name": "test13”, "feature": "TestFeature12”, "logLevel": “error”, "Id": "234", "date": 1652187342.57, "productName": “testproduct12”, "process_name": “test process, "pid": 685, "process_status": "sleeping", "process_cpu_usage": 0.0, "process_ram_usage": 0.0, "metric_type": “application_process}
{"Env": "someenv14”, "Name": "test14”, "feature": "TestFeature13”, “info”: “error”, "Id": "2344", "date": 1672187342.57, "productName": “testproduct13”, "process_name": “test process, "pid": 695, "process_status": "sleeping", "process_cpu_usage": 0.0, "process_ram_usage": 0.0, "metric_type": “security”}

This entire thing is coming in as one event. I have applied line breakers in the props.conf file:

[test_sourcetype]
SHOULD_LINEMERGE =false
NO_BINARY_CHECK=true
BREAK_ONLY_BEFORE={"Env"
MUST_BREAK_AFTER=\"\}
TIME_PREFIX=date
TIMEFORMAT=%s%4N
MAX_TIMESTAMP_LOOKAHEAD = 14

I have added it under /SplunkUniversalForwarder/etc/apps/splunk_TA_windows app/local/props. None of my line breaking is getting applied, please help me with this. Should I add props.conf under the default folder? Regards, NVP
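Two hedged observations: a universal forwarder does not apply line breaking or timestamp settings at all (it only does so for structured data with INDEXED_EXTRACTIONS), so this props.conf needs to live on the indexer or a heavy forwarder; and the setting name is TIME_FORMAT, not TIMEFORMAT. A sketch of what that indexer-side stanza could look like, assuming the events really are concatenated JSON objects as in the sample:

  # props.conf on the indexer / heavy forwarder, not on the UF
  [test_sourcetype]
  SHOULD_LINEMERGE = false
  NO_BINARY_CHECK = true
  # break between a closing brace and the next {"Env" object
  LINE_BREAKER = \}(\s*)\{"Env"
  TIME_PREFIX = "date":\s*
  # epoch seconds; the .57 fraction is dropped with this format
  TIME_FORMAT = %s
  MAX_TIMESTAMP_LOOKAHEAD = 15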
Hi, I am creating a dashboard where the data is provided via CSV, so I am using the inputlookup command. However, I need to search on one specific field (column) in the CSV, and I am currently using this but it is not working:

| inputlookup ABC | search Device Name = "sdf"

Can you please help?
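Field names that contain a space have to be quoted in SPL, otherwise Device and Name are read as two separate search terms. A minimal sketch of the usual forms, reusing the lookup name and value from the post:

  | inputlookup ABC
  | search "Device Name"="sdf"

or, using where with eval-style single quotes around the field name:

  | inputlookup ABC
  | where 'Device Name'="sdf"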
Hi, network request data collected by the iOS device is being lost. There is often a period of time when the data is not available. Android does not have this problem.
Hello, we are planning to upgrade from version 8.0.1 to 8.2.6 (the latest version), but we see that CentOS reaches end of life on December 31st. Does this version of Splunk still support CentOS?
Hello, I am trying to create a detection for the AWS exploitation tool Pacu.py. It is to detect the use of the enumeration tool within Pacu.py, which executes the following AWS commands in less than a second:
ListUserPolicies
GetCallerIdentity
ListGroupsForUser
ListAttachedUserPolicies
Timeframe: First Event: 2022-05-19 10:02:25, Last Event: 2022-05-19 10:02:26
Each command generates a separate event, so I was wondering if it is possible to create a search which detects these commands executed from the same account within a 1-second timeframe? I am unsure how to specify a time window, so if you could help, that would be greatly appreciated.
Query:
index="aws-cloudtrail" "GetCallerIdentity" OR "ListUserPolicies" OR "ListGroupsForUser" OR "ListAttachedUserPolicies" | table _time, principalId, userName, aws_account_id, sourceIPAddress, user_agent, command
Many Thanks
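A hedged sketch of one way to express the 1-second window: bucket events into 1-second bins and require all four API calls from the same account in the same bin. It assumes the call name is in eventName as the AWS add-on usually extracts it (substitute the post's command field if that is what the data carries), and bin is bucket-aligned rather than a true sliding window:

  index="aws-cloudtrail" eventName IN ("GetCallerIdentity","ListUserPolicies","ListGroupsForUser","ListAttachedUserPolicies")
  | bin _time span=1s
  | stats dc(eventName) AS distinct_calls, values(eventName) AS calls, values(sourceIPAddress) AS src_ips BY _time, userName, aws_account_id
  | where distinct_calls=4

For a strict sliding window, streamstats with time_window=1s over the same fields is an alternative worth testing.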
Hi Everyone, I am trying to ingest change-related data from a database using DB Connect, with a rising column input. I have specified changerequestID as the rising column. The data has other fields as well, such as creationtime, Lastmodifiedtime, Solvedtime, etc. If a change is open, the database entries for columns such as LastModifiedtime and Solvedtime can be blank. My question is: if these values get updated in the DB after some time, but the row was already ingested into Splunk via the rising column before the update, will the updated row get ingested into Splunk? Thanks
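For context, a rising column input only fetches rows whose rising column value is greater than the saved checkpoint, so rows already behind the changerequestID checkpoint are not re-read when their other columns change. One common workaround (a sketch; the table name below is a placeholder) is to make the last-modified timestamp the rising column so every update produces a fresh event:

  SELECT changerequestID, creationtime, Lastmodifiedtime, Solvedtime
  FROM change_requests
  WHERE Lastmodifiedtime > ?
  ORDER BY Lastmodifiedtime ASC

On the Splunk side you would then keep only the latest event per change, e.g. | stats latest(*) AS * BY changerequestID, or | dedup changerequestID sortby -_time.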
Hi, some users complain about Splunk search. Before Splunk, they simply opened the log file and looked for issues.
1 - As you know, log files start at the first line and finish at the last line, while Splunk search is reversed: the newest event shows first.
2 - Another issue is that they can't trace transactions easily with Splunk. Because of Splunk's result limitations they have to set a smaller time range, and imagine how hard that is when over 1,000 transactions occur every second.
FYI: I tried "sort _raw" but it is slow. I tried the transaction command, but they have unstructured transactions that are not easy to find. I tried removing the limitation, but then it is slow. So they prefer to use log files instead of Splunk. How can I help them use Splunk effectively? Any ideas? Thanks
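On the ordering point, a hedged sketch: reverse simply flips the order of the events already retrieved, which is much cheaper than sort, while sort 0 _time lifts the 10,000-result cap at the cost of speed (the index, sourcetype, and host below are placeholders):

  index=app_logs sourcetype=my_app host=my_host
  | reverse

For tracing, narrowing the search with indexed terms first (for example the transaction ID itself as a bare keyword) usually helps more than raising result limits, since Splunk then only has to order a small result set.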