All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I need direction on how to configure the Linux Auditd app to collect data from a host into an index. Thank you.
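A minimal sketch of what the forwarder-side monitor stanza often looks like for auditd data; the path is the usual auditd log location, while the sourcetype and index names here are assumptions you should adjust to whatever the Linux Auditd app and your environment actually define:

# inputs.conf on the forwarder (sketch; sourcetype and index names are assumptions)
[monitor:///var/log/audit/audit.log]
sourcetype = linux:audit
index = linux_audit
disabled = 0

The index must already exist on the indexer, and the app's props/transforms should then pick the events up by sourcetype.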
Hi, I am using dbxquery to fetch around 800,000 rows from a database into Splunk:

| dbxquery connection=x query="select * from table002" shortnames=t maxrows=800000

The above query takes around 500 seconds. The default maxrows is 1000 in the dbxquery.py script. How do I reduce the time taken without impacting the performance of my server?

P.S. | noop search_optimization=false doesn't improve my search time.
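Since most of the elapsed time is usually spent pulling rows across the DB Connect layer, one hedged way to speed this up is to narrow the SQL itself rather than tune dbxquery; the column names and WHERE clause below are placeholders for whatever subset you actually need:

| dbxquery connection=x shortnames=t maxrows=800000 query="select col_a, col_b, col_c from table002 where updated_at >= CURRENT_DATE - 1"

Pulling only the needed columns (and, where possible, only recent rows) reduces both transfer time and the load on the database server.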
I feel like this is a known issue that has been around for a while, and I'm reaching out to see if anyone has worked around it. I found one single post related to this where the only suggestion was to change how frequently Splunk reads in data, but honestly that's not much of an option here because Splunk is already known to peg out DBCPU time in my org. The short version is that I'm having trouble with Splunk only ingesting a job from the AsyncApexJob object in my Salesforce org once, even though that job gets updated repeatedly as it goes through statuses of Queued, Running, Completed, etc. It's not every job that does this, but it happens frequently enough that I can't build an accurate alert off it. There's a release note for add-on 4.2.2 that says this is a known issue: https://docs.splunk.com/Documentation/AddOns/released/Salesforce/Releasenotes#Known_issues however I'm on 4.0.3 of the Salesforce add-on and my Splunk Enterprise is 7.3.3. Has anyone else noticed this, worked around it without changing how often Splunk hits the org, or heard whether it will be fixed in a future update?
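For anyone hitting the same thing, one hedged alerting workaround is to key off the most recent status Splunk has seen per job instead of assuming every status transition made it in; the index and sourcetype below are assumptions, and Id/Status are the standard AsyncApexJob field names, so adjust them to whatever your input actually produces:

index=salesforce sourcetype="sfdc:asyncapexjob*"
| stats latest(Status) as Status latest(_time) as last_seen by Id
| where Status!="Completed"

This only helps where at least one later update does get ingested, so it is a mitigation rather than a fix for the known issue.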
Hello, I want to remove all the backslashes and double quotes from the following fields:

conn=\"pass\""
ip=\"10.23.22.1\""

I am trying to extract with EVAL-conn = replace(conn,"\\\\(.),"") and EVAL-ip = replace(ip,"\\\\(.),"") in my props.conf, but it is not removing the last double quote and gives me the following results: conn=pass" and ip=10.23.22.1". The results I want: conn=pass and ip=10.23.22.1. Can someone please help/guide me with this extraction? Thanks in advance.
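A hedged variant that strips both the backslashes and any leftover double quotes in one character class; the stanza name is a placeholder, and escaping rules can be finicky, so test this against your actual events:

# props.conf (sketch)
[your_sourcetype]
EVAL-conn = replace(conn, "[\\\\\"]+", "")
EVAL-ip = replace(ip, "[\\\\\"]+", "")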
Hello - I am new to Splunk and am trying to build a search that checks three different fields for duplicates so I can make a report out of it. Two of the fields are the name and serial number, and the third field is the name and serial number combined. Any help is appreciated, thanks.
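A hedged starting point, assuming the three fields are called name, serial_number, and name_serial (rename them to match your data): group by the combined field and keep anything that appears more than once:

index=your_index sourcetype=your_sourcetype
| stats count values(name) as name values(serial_number) as serial_number by name_serial
| where count > 1

The resulting table is already report-shaped: one row per duplicated name/serial combination with its occurrence count.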
Hi, I'm trying to make a dashboard element that shows when one of our applications is restarted, so I have a query that searches for "Starting Application". When I put this on my dashboard, I see the columns "i", timestamp, and event. How can I add a column that shows the kubernetes_container_name? And how can I change the column width and trim the original text so I get no line breaks? Thanks for your help.
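The extra "i"/timestamp/event columns suggest the panel is rendering raw events; turning the search into a table lets you choose the columns yourself. A hedged sketch (the index is a placeholder and the 120-character cut-off is arbitrary):

index=your_index "Starting Application"
| eval message=substr(_raw, 1, 120)
| table _time kubernetes_container_name message

Trimming with substr also keeps the text short enough that the table no longer wraps onto multiple lines.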
On a few Java servers/nodes in my environment/SaaS we were getting the warning "Metric registration limit of 5000 reached". We have already increased the limit to 7500 on both nodes by adding the parameter -Dappdynamics.agent.maxMetrics=7500 to the JVM startup file, but we are again getting the alert "Metric registration limit of 7500 reached". AppDynamics is only discovering 10 business transactions on these nodes, so I'm not sure why the limit keeps being reached. Other nodes run 40 BTs and there is no such alert. How can we fix this permanently and find the root cause? Also, on both of these servers HTTP errors are not being captured in the metrics Errors|EIS-Dxp|HTTP Error Code : 504|Number of Error and Errors|EIS-Dxp|HTTP Error Code : 502|Number of Error. Is this issue also because of the metric registration limit?
A client of mine is asking: I'm hoping you can help me with something. I am trying to analyze the volume to a particular Apigee endpoint, so I have written the following query, where I group the stats by my calculated date value to see a day-wise view for a service that hit a particular backend.

index="apigee-prod-cne" sourcetype="apigee_metrics" (apiproxy="cc-cust-profile-01-v1") target_host="sapisugw-prd.duke-energy.com" proxy_pathsuffix="/email/bp/retrieve" environment="prod"
| dedup gateway_flow_id
| spath request_verb
| search request_verb != "OPTIONS"
| eval yourdate = strftime(_time,"%D")
| eval yourhour = strftime(_time, "%H")
| eval yourmin = strftime(_time,"%M")
| stats count(x-apigee.edge.execution.stats.request_flow_start_timestamp) as hits by yourdate

What I'm not understanding is why, when I add an additional field to group by (changing none of my other conditions), I suddenly see a spike in calls rather than the same total split into segments. Can you help me understand what I may be missing to properly evaluate traffic through our proxies?
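One hedged way to check whether the extra group-by field is really changing the totals (rather than just splitting them) is to group by both fields and then sum back up to the day; the first line below is a placeholder for the same base search and evals shown above. If daily_hits differs from the single-field version, something earlier in the pipeline, such as the dedup, is behaving differently between runs rather than the group-by itself:

<same base search and evals as above>
| stats count as hits by yourdate yourhour
| stats sum(hits) as daily_hits by yourdate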
I have an indexer cluster and a search head environment. I've deployed the Splunk_TA_fortinet_fortigate app on both the search head and the cluster. Logs come in via syslog to syslog-ng, where they are shipped to the indexer via the HTTP Event Collector's raw endpoint. Logs come in with the fortigate_log sourcetype, then the TA sets the sourcetype to the correct type via a transform. Those sourcetypes then specify TIME_PREFIX=^ in props.conf. However, this doesn't work, as there is no date field for Splunk to get the date from. There is a time field though... (see sample data below). What I want is to use the eventtime field as the timestamp. So I created a local/props.conf file that looks like this:

[fortigate_log]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fgt_log]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_traffic]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_utm]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_anomaly]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

[fortigate_event]
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

The idea comes from a few places, notably these documents:
https://splunkbase.splunk.com/app/2800/#/details (there is a section which has that)
https://community.splunk.com/t5/Getting-Data-In/How-to-configure-int64-epoch-nanosecond-timestamp-as-time/m-p/26881

I deployed it to the indexer cluster first, just in the traffic and utm blocks, thinking that I needed to override what is in the default props.conf. That didn't help, so I added it to the incoming fortigate_log sourcetype, hoping it would do the time extraction earlier in the ingestion process. Neither seems to do anything. I also tried putting it on the search head, thinking that the configuration bundle it sends to the cluster may be overriding my config. Still nothing. What am I doing wrong? Any ideas?

Thanks,
Scott

Sample data:

time=23:59:59 devname="hostname" devid="devid" slot=1 logid="0000000020" type="traffic" subtype="forward" level="notice" vd="root" eventtime=1630645200040167048 tz="-0500" srcip=1.2.3.4 srcport=35847 srcintf="port25" srcintfrole="undefined" dstip=2.3.4.5 dstport=49164 dstintf="port26" dstintfrole="undefined" srcuuid="xxx" dstuuid="xxx" sessionid=455176702 proto=17 action="accept" policyid=10873 policytype="policy" poluuid="xxxx" service="udp/49164" dstcountry="United States" srccountry="United States" trandisp="noop" duration=11699117 sentbyte=49457564285 rcvdbyte=0 sentpkt=164980295 rcvdpkt=0 appcat="unscanned" sentdelta=49457564285 rcvddelta=0
I have a field (FIELD1) that may contain one of several strings.  These strings may appear in different locations within FIELD1.  I would like to select all records where FIELD1 contains any of these strings. Example of 4 strings:   "ABC(Z"   "DEF(Z"   "GHIJK (Z" "LMNOP (Z" What is an efficient method for selecting any records that contain any one of these strings in any location within FIELD1?
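A hedged sketch using the search command, whose IN operator accepts wildcards (the index and sourcetype are placeholders; note that substring matches like these cannot be resolved efficiently from the index, so the search still has to scan the candidate events):

index=your_index sourcetype=your_sourcetype
| search FIELD1 IN ("*ABC(Z*", "*DEF(Z*", "*GHIJK (Z*", "*LMNOP (Z*")

The equivalent OR form (FIELD1="*ABC(Z*" OR FIELD1="*DEF(Z*" ...) behaves the same; IN is just more compact.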
Hello, I have a script which is always up and must never be stopped, and I want to know how to handle it in inputs.conf. If I do not put a cron interval, does it work the way I need? Thanks.
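For reference, a minimal sketch of how a long-running scripted input is often configured; the path, sourcetype and index are hypothetical. My understanding is that Splunk does not launch a second copy of a script while the previous one is still running, and a negative interval is commonly used to mean "start it once when splunkd starts", but please verify this against the inputs.conf spec for your version before relying on it:

# inputs.conf (sketch; path, sourcetype and index are assumptions)
[script://./bin/my_long_running_script.sh]
interval = -1
sourcetype = my:script:output
index = main
disabled = 0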
I have two dashboards I am currently working with. On the first dashboard I have two column chart panels, where the second one is based on the first one. So if I click on a column on the first chart it sets a token, and based on this token the other panel appears with its results. Now if I click on one of those columns I want to set a second token. So far so good. I want to send these tokens as inputs to the second dashboard where I need to work with them. The first token should be passed to a dropdown input, which works perfectly fine, no troubles here. The second token, on the other hand, needs to be passed to a multiselect input, which somehow doesn't work. The passing itself works perfectly fine; the URL I get when opening the new tab to the other dashboard is good, but it somehow automatically changes the second token to an asterisk *. I tried typing the URL manually but it still happens, so I think something is wrong with the second dashboard. I guess there may be a problem with my inputs, since every input I have is based on another input, like a "higher level" input. My code is like this:

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="first_level">
    <label>1st Group</label>
    <prefix>1st_level="</prefix>
    <suffix>"</suffix>
    <fieldForLabel>1st_level</fieldForLabel>
    <fieldForValue>1st_level</fieldForValue>
    <search>
      <query>| inputlookup group_lookup | eval parent_group=mvindex(parent_group, -1), parent_group_count=mvcount(parent_group), 1st_level=mvindex(parent_group, -2), 2nd_level=if(parent_group_count=2, unit_name, mvindex(parent_group, -3)) | search parent_group="CEO" | stats count by 1st_level</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <change>
      <unset token="form.2nd_level"></unset>
      <unset token="form.3rd_level"></unset>
      <unset token="form.4th_level"></unset>
      <unset token="form.5th_level"></unset>
    </change>
    <choice value="*">All</choice>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <input type="multiselect" token="2nd_level">
    <label>Second Level Group</label>
    <prefix>(</prefix>
    <suffix> OR 2nd_level="n/a")</suffix>
    <valuePrefix>2nd_level="</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <delimiter> OR </delimiter>
    <fieldForLabel>2nd_level</fieldForLabel>
    <fieldForValue>2nd_level</fieldForValue>
    <search>
      <query>| inputlookup group_lookup | eval parent_group=mvindex(parent_group, -1), parent_group_count=mvcount(parent_group), 1st_level=mvindex(parent_group, -2), 2nd_level=if(parent_group_count=2, unit_name, mvindex(parent_group, -3)) | search parent_group="CEO" | search $1st_level$ | stats count by 2nd_level</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <choice value="*">All</choice>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <input type="multiselect" token="3rd_level">
    <label>Third level group</label>
    <choice value="*">All</choice>
    <default>*</default>
    <prefix>(</prefix>
    <suffix> OR 3rd_level="n/a")</suffix>
    <initialValue>*</initialValue>
    <valuePrefix>3rd_level="</valuePrefix>
    <valueSuffix>"</valueSuffix>
    <delimiter> OR </delimiter>
    <fieldForLabel>3rd_level</fieldForLabel>
    <fieldForValue>3rd_level</fieldForValue>
    <search>
      <query>| inputlookup group_lookup | eval parent_group=mvindex(parent_group, -1), parent_group_count=mvcount(parent_group), 1st_level=mvindex(parent_group, -2), 2nd_level=if(parent_group_count=2, unit_name, mvindex(parent_group, -3)), 3rd_level=if(parent_group_count&gt;=4, mvindex(parent_group, -4), "n/a") | search parent_group="CEO" | search $1st_level$ $2nd_level$ | search 2nd_level!="n/a" | stats count by bc_name_2nd</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
</fieldset>

I have two more, deeper inputs, but I think you can see how they work now. At first I thought the problem was coming from the preselected "All" value, but when I changed it to nothing it didn't help. My guess is that it's coming from the queries that are based on the higher-level input, but I have no clue how I can prevent that without losing the token search filters. Any help is highly appreciated!! I've been stuck on this problem for hours now.
Posting this in the correct forum Hello everyone. Standalone Splunk Enterprise 8.2.2 on Ubuntu 21.04. I have the Hurricane Labs App for Shodan installed. Following the directions, I have obtained a SHODAN API key to configure the App. When I go to the "App Setup" page it only shows a white rectangle, rather than somewhere to enter the API key. I have allowed access to api.shodan.io for http/https through my firewall from the Splunk server. Tried it with Firefox/Edge and disabling browser add-ons. Any idea why it is not working? Any help appreciated. Paul
Hey all, is it possible that there is an overlap of Azure AD sign-ins between these two add-ons? I don't want to have duplicate logs and waste ingestion. Can anyone help and explain the differences between the two? They both appear to have inputs for Azure AD sign-ins. https://splunkbase.splunk.com/app/4055/ https://splunkbase.splunk.com/app/3757/ Thanks
How can I extract this?

"properties": {"nextLink": null, "columns": [
  {"name": "Cost", "type": "Number"},
  {"name": "Date", "type": "Number"},
  {"name": "Charge", "type": "String"},
  {"name": "Publisher", "type": "String"},
  {"name": "Resource", "type": "String"},
  {"name": "Resource", "type": "String"},
  {"name": "Service", "type": "Array"},
  {"name": "Standard", "type": "String"},
"rows": [
  [2.06, 20210807, "usage", "uuuu", "hhh", "gd", "bandwidth", "[app:"new","type":"band"]", "HHH"],
  [2.206, 20210807, "usage", "uuuhhh", "ggg", "gd", "bandwidth", "[app:"old","type":"land"]", "YYY"]
]

The number of columns can increase. @ITWhisperer Can you help?
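A hedged sketch of one way to line the rows up with the column names, assuming the full event is well-formed JSON (the sample above is truncated and the Service values contain unescaped quotes, which would break spath); it reads the column names once, expands each row, and zips names against values. The first line is a placeholder for your base search:

<your base search>
| spath path=properties.columns{}.name output=col_names
| spath path=properties.rows{} output=row
| mvexpand row
| eval vals=spath(row, "{}")
| eval kv=mvzip(col_names, vals, "=")
| eval Cost=mvindex(vals, 0), Date=mvindex(vals, 1), Charge=mvindex(vals, 2)
| table Cost Date Charge kv

Because the number of columns can grow, the kv multivalue (name=value pairs) is kept alongside the named examples so new columns stay visible without editing the search.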
Hello, for testing purposes at home I deployed Splunk in Docker following https://splunk.github.io/docker-splunk/. Splunk Enterprise and the UF work flawlessly, but I would like to get logs from my Windows 10 machine into the Splunk Docker instance. The containers are in the same Docker network, but the host network is different, so the Windows host UF can't reach Splunk Enterprise. When switching the containers' network to host, it doesn't work at all. I am definitely missing something here. Is it even possible to send data from the host to a Docker container in real time, as I would like to? My host is Windows 10 running Docker with two containers: a Splunk UF and Splunk Enterprise. Thank you.
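A hedged sketch of the usual pattern: publish the Splunk Enterprise container's receiving port to the Windows host (for example -p 9997:9997 on docker run, with a splunktcp input listening on 9997 inside the container), install a Universal Forwarder natively on Windows, and point it at the published port. The Windows UF's outputs.conf would then look roughly like this (the address and port are assumptions):

# outputs.conf on the Windows universal forwarder (sketch)
[tcpout]
defaultGroup = docker_splunk

[tcpout:docker_splunk]
server = 127.0.0.1:9997

Container-to-container traffic can stay on the Docker network; only the host-to-container path needs the published port.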
Hi everybody, I hope you can help me with my problem. I want to add fields to a lookup with a search that doesn't use an index. Since we have no results, I use fillnull and appendpipe to create a result, but the new fields are not added to the lookup. The KV store fields are fixed and defined in transforms.conf and collections.conf. For example:

| table key, Category, activation, target, tester, url
| fillnull
| appendpipe
    [ stats count
    | eval Category = "HOST Blacklist"
    | eval activation = "09/15/21"
    | eval target = "Un test ajout"
    | eval url = "http://www.test.html"
    | eval tester = "*test.html*"
    | eval key=Category.tester.target
    | where count==0]
| fields - count
| table key, Category, activation, target, tester, url
| outputlookup t_lookup append=True override_if_empty=false key_field=key

I see my event in the search interface but not in my lookup. Do you have an idea for adding fields like this? Thanks for your help.
Hi everyone, I want to monitor files on a Linux server. Every hour (at minute 59), a DATE.log file is compressed into a DATE.gz. Through inputs.conf I am monitoring all the files (DATE*). I noticed that I am missing some logs for about 20 minutes (from roughly minute 37 to minute 59) every hour between 8 am and 8 pm. I checked splunkd.log and saw this error:

WARN TailReader - Insufficient permissions to read file='.../DATE.gz' (hint: Permission denied)

I gave read rights on the .gz files, but maybe that's not enough, as the decompression happens on the forwarder. Should I give my splunk user write rights on these files? I'm not sure it will fix the missing-logs problem, but I will start with that ^^ Have a good day,
Hi All, to forward data to third-party systems, I integrated the Splunk agent with the configs below. The third party is able to receive data by listening on the TCP port.

Issue: I am unable to view default internal fields like source or host, which are required for data enrichment. I tried adding host and source in inputs.conf, but no luck. Is there any limitation on forwarding internal fields to third-party systems?

inputs.conf

[blacklist:$SPLUNK_HOME/var/log/splunk]

[monitor:///tmp/test1.log]
_TCP_ROUTING = App1Group

outputs.conf

[tcpout:App1Group]
server = <ip address>:<port>
sendCookedData = false
- name: splunk jobid receive api call
  uri:
    url: https://{{ fis_apiBaseurl }}/services/search/jobs
    method: POST
    validate_certs: false
    timeout: 360
    force_basic_auth: yes
    status_code: 201, 200, 204
    headers:
      Accept: application/json
      Content-Type: "application/json"
      Authorization: " Bearer {{ fis_splunk_console_auth }}"
    return_content: true
    body: '{search = {{ data }}}'
    #body_format: json
  register: get_JobID
  delegate_to: 127.0.0.1

I am using the above code, and the URL is https://{{ fis_apiBaseurl }}/services/search/jobs. I am running this job in Ansible Tower, but the content comes back essentially empty even though the return code is 200:

ok: [100.73.4.110] => { "get_JobID": { "cache_control": "no-store, no-cache, must-revalidate, max-age=0", "changed": false, "connection": "Close", "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--This is to override browser formatting; see server.conf[httpServer] to disable. . . . . . . . . . . . …

Can someone help me with the URL or the exact query?
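For reference, a hedged sketch of how this call is often written: the search/jobs endpoint expects a form-encoded body whose search parameter is a complete SPL string (starting with the search keyword), not JSON. Variable names are kept from the question, output_mode=json is optional but makes the response easier to parse, and this assumes data does not already begin with a generating command and that your Ansible version supports body_format: form-urlencoded (2.7+):

- name: splunk jobid receive api call (sketch)
  uri:
    url: "https://{{ fis_apiBaseurl }}/services/search/jobs"
    method: POST
    validate_certs: false
    timeout: 360
    status_code: [200, 201]
    headers:
      Authorization: "Bearer {{ fis_splunk_console_auth }}"
    return_content: true
    body_format: form-urlencoded
    body:
      search: "search {{ data }}"
      output_mode: json
  register: get_JobID
  delegate_to: 127.0.0.1

The response then contains the sid of the new job, which can be polled via /services/search/jobs/<sid>/results.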