All Topics

We have the stanza below in the /opt/splunk/etc/splunk-launch.conf file of our heavy forwarder to avoid security risks. It means the heavy forwarder is bound to localhost only:

SPLUNK_BINDIP=127.0.0.1

However, we want to receive data on this heavy forwarder from a specific machine using the HTTP Event Collector. Is there any way to receive the events without removing this stanza? Has anyone else faced a similar situation?
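One possibility, to the best of my knowledge (SPLUNK_BINDIP applies to all splunkd listening ports, so with it set to 127.0.0.1 no remote host can reach HEC at all), is to remove the SPLUNK_BINDIP restriction and instead limit the HEC input itself with an acceptFrom network list in inputs.conf. A sketch, where 10.0.0.5 is a placeholder for the sending machine:

```
# inputs.conf (sketch; 10.0.0.5 stands in for the specific machine)
[http]
disabled = 0
port = 8088
acceptFrom = 10.0.0.5
```

With acceptFrom set, splunkd rejects HEC connections from any other address, which may achieve the same security goal.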
Hi, we are using Splunk 7.3.6 in a clustered environment with intensive usage of the KV Store in the SH cluster. In this version Splunk uses mongod with the mmapv1 storage engine, which MongoDB deprecates in future releases. Is it possible to switch to the in-memory storage engine? We do not need persistence for the data but are looking to optimize for latency. Thanks and regards, Andre Fissel
I have ingested some logs into Splunk which now look like the below when searching from the search head.

{\"EventID\":563662,\"EventType\":\"LogInspectionEvent\",\"HostAgentGUID\":\"11111111CE-7802-1111111-9E74-BD25B707865E\",\"HostAgentVersion\":\"12.0.0.967\",\"HostAssetValue\":1,\"HostCloudType\":\"amazon\",\"HostGUID\":\"1111111-08CF-4541-01333-11901F731111109\",\"HostGroupID\":71,\"HostGroupName\":\"private_subnet_ap-southeast-1a (subnet-03160)\",\"HostID\":85,\"HostInstanceID\":\"i-0665c\",\"HostLastIPUsed\":\"192.168.43.1\",\"HostOS\":\"Ubuntu Linux 18 (64 bit) (4.15.0-1051-aws)\",\"HostOwnerID\":\"1111112411\",\"HostSecurityPolicyID\":1,\"HostSecurityPolicyName\":\"Base Policy\",\"Hostname\":\"ec2-11-11-51-45.ap-southeast-3.compute.amazonaws.com (ls-ec2-as1-1b-datalos) [i-f661111148a3f6]\",\"LogDate\":\"2020-07-08T11:52:38.000Z\",\"OSSEC_Action\":\"\",\"OSSEC_Command\":\"\",\"OSSEC_Data\":\"\",\"OSSEC_Description\":\"Non standard syslog message (size too large)\",\"OSSEC_DestinationIP\":\"\",\"OSSEC_DestinationPort\":\"\",\"OSSEC_DestinationUser\":\"\",\"OSSEC_FullLog\":\"Jul 8 11:52:37 ip-172-96-50-2 amazon-ssm-agent.amazon-ssm-agent[24969]: \\\"Document\\\": \\\"{\\\\n \\\\\\\"schemaVersion\\\\\\\": \\\\\\\"2.0\\\\\\\",\\\\n \\\\\\\"description\\\\\\\": \\\\\\\"Software Inventory Policy Document.\\\\\\\",\\\\n \\\\\\\"parameters\\\\\\\": {\\\\n \\\\\\\"applications\\\\\\\": {\\\\n \\\\\\\"type\\\\\\\": \\\\\\\"String\\\\\\\",\\\\n \\\\\\\"default\\\\\\\": \\\\\\\"Enabled\\\\\\\",\\\\n \\\\\\\"description\\\\\\\": \\\\\\\"(Optional) Collect data for installed applications.\\\\\\\",\\\\n \\\\\\\"allowedValues\\\\\\\": [\\\\n \\\\\\\"Enabled\\\\\\\",\\\\n

How can I format this correctly so it shows as JSON when searching on the search head? I'm pretty new to Splunk, hence have little idea on this.
My file_monitor > props.conf looks like the below:

[myapp:data:events]
pulldown_type = true
INDEXED_EXTRACTIONS = json
category = Custom
description = data
disabled = false
TRUNCATE = 99999
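The backslash-escaped quotes suggest the JSON was string-escaped once more somewhere before ingestion, so INDEXED_EXTRACTIONS=json never sees valid JSON. A minimal sketch of what any fix (at the source, or via a SEDCMD in props.conf) has to undo; the sample payload below is heavily shortened from the event above:

```python
import json

# the payload as it appears in the indexed event (assumed, heavily shortened)
raw = r'{\"EventID\":563662,\"EventType\":\"LogInspectionEvent\"}'

# strip the extra layer of escaping, then parse as ordinary JSON
clean = raw.replace('\\"', '"')
event = json.loads(clean)
print(event["EventID"])  # 563662
```

If the unescaped form parses like this, the cleanest fix is usually to emit unescaped JSON at the source rather than repair it at search time.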
Hello, I created indexed fields at index time, then executed a tstats query, and it works fine. But when I collect the resulting data into a summary index using the Splunk collect command, tstats does not work on the summary index.

| tstats latest(result._time) as _time, values(result.relational_correlationId) as relational_correlationId, values(result.tracePoint) as tracePoint where index="hec_example1" by result.environment, result.businessGroup, result.appName, result.interfaceName, result.correlationId | table _time, tracePoint | collect index="summary_mt"

tstats is not working on the summary index (I have configured fields.conf as well).
Our production Splunk Cloud was recently upgraded to 7.2.10.1, while our on-prem DEV and QA environments were upgraded to a slightly different version: 7.2.9.1. We believed there should not be a significant difference between the versions and didn't test the one we were upgrading our cloud environment to. As a result, we experienced a production issue with handling .js and .css scripts due to the difference in versions, which we had to resolve on the day of the upgrade. Right now we are preparing to upgrade our Splunk Cloud and Splunk Enterprise to version 8, and we want to prevent the issues we experienced due to the lack of testing of the right versions. Splunk now has different versioning for Cloud and Enterprise: for Cloud it is 8.0.2001, 8.0.2004, etc.; for Enterprise, 8.0.1, 8.0.2, 8.0.2.1, etc. We are working with our Splunk support team to figure out how Cloud and Enterprise versions correspond to each other so we can run better tests, but I thought I should ask here too. Is there any official Splunk documentation online about the mapping between Splunk Cloud and Splunk Enterprise versions?
Hello, we want to combine two fields by using eval inside the tstats where-clause search. Please see my search below:

| tstats latest(result._time) as _time ,values(result.relational_correlationId) as relational_correlationId,values(result.tracePoint) as tracePoint,values(result.timestamp) as timestamp,values(result.content.businessFields{}.key) as content.businessFields{}.key,values(result.content.businessFields{}.value) as content.businessFields{}.value where index="hec_example1" by result.environment,result.businessGroup,result.appName,result.interfaceName,result.correlationId | rename result.environment as environment, result.businessGroup as businessGroup, result.appName as appName, result.interfaceName as interfaceName, result.correlationId as correlationId | table _time,environment,businessGroup,appName,interfaceName,tracePoint,timestamp,correlationId,content.businessFields{}.key,content.businessFields{}.value

Please help me with this.
Hi everyone, I need to generate a list of all users in Splunk Enterprise, but I am stuck on permissions. I have a simple user (without admin access), and when I query servicesNS at splunk:8089/servicesNS/admin/search/authentication/users I get "You do not have permissions to access objects of user=admin" in response. I also tried searching "index=_audit" and "| rest /services/authentication/users", but without success. How can I get a list of users in Splunk using a USER account without admin access? Maybe JS or REST can help? Thanks.
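For what it's worth, the endpoint without the /servicesNS/admin scope is /services/authentication/users, and as far as I know the role still needs a capability that permits listing users before the call succeeds. Below is a sketch that only shows how the JSON response would be parsed; the URL is a placeholder and the abbreviated response shape is an assumption:

```python
import json

# hypothetical endpoint (host and query string are placeholders)
USERS_URL = "https://splunk:8089/services/authentication/users?output_mode=json&count=0"

def extract_usernames(body: str) -> list:
    """Pull the user names out of a JSON response body."""
    return [entry["name"] for entry in json.loads(body)["entry"]]

# abbreviated response in the shape the endpoint returns (assumed)
sample = '{"entry": [{"name": "admin"}, {"name": "alice"}]}'
print(extract_usernames(sample))  # ['admin', 'alice']
```

The actual HTTP call (with the non-admin user's credentials) can be made with any client; the key point is querying /services/... rather than another user's /servicesNS/admin/... namespace.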
Hello, I'm having a strange problem on Splunk 8 (clean install) using the "proxyConfig" directive. Inside the server.conf file I configured the following:

[ProxyConfig]
http_proxy = http://127.0.0.1:3128
https_proxy = https://127.0.0.1:3128
proxy_rules = *
no_proxy = localhost, 127.0.0.1, ::1

Unfortunately, when I try to install an app via the web interface, the interface crashes immediately after I enter my credentials for downloading the app. After a few seconds it throws me out and the login screen returns. Please help me... Thanks
Hi @gcusello , we have installed and configured the Splunk Add-on for Symantec Endpoint Protection successfully. Splunk has started receiving logs (index=symantec), but we can see nothing on its Symantec dashboard; it shows "No results found". We restarted Splunk but it didn't work. Please help. Regards, Rahul
How to sum all the latest events for a specific field?

Example raw data of the events:

Client=XXXXX,CreationTime=3/19/2020 9:09:36 AM,Version=08_07,NumberOfRequests=1,LastRequestTime=3/19/2020 9:09:36 AM,InactiveTimeSpan=0.7 minutes
Client=XXXXX,CreationTime=3/19/2020 9:08:50 AM,Version=08_07,NumberOfRequests=46,LastRequestTime=3/19/2020 9:10:17 AM,InactiveTimeSpan=0.0 minutes
Client=XXXXX,CreationTime=3/19/2020 9:09:56 AM,Version=08_07,NumberOfRequests=2,LastRequestTime=3/19/2020 9:10:13 AM,InactiveTimeSpan=0.1 minutes

Splunk query used:

index=mds sourcetype=logs host=xxx AND NumberOfRequests | rex field=_raw max_match=0 ",NumberOfRequests=(?P<my_requests>\d+),"| mvexpand my_requests | stats sparkline(sum(my_requests)) as Trend sum(my_requests) as Total, avg(my_requests) as Avg, max(my_requests) as Peak, latest(NumberOfRequests) as Current, latest(_time) as "Last Updated" by host | convert ctime("Last Updated")

As in the example, there are three NumberOfRequests values present; given the same kind of events with different NumberOfRequests values, I want a field that holds the sum of NumberOfRequests taken only from the latest events. Please suggest.
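In plain code, the aggregation being asked for (keep only the latest event per host, then sum that event's NumberOfRequests across hosts) looks roughly like this; the data below is adapted from the sample above and the host values are assumptions:

```python
from datetime import datetime

# sample events (assumed shape, adapted from the raw data above)
events = [
    {"host": "a", "time": "3/19/2020 9:09:36 AM", "NumberOfRequests": 1},
    {"host": "a", "time": "3/19/2020 9:08:50 AM", "NumberOfRequests": 46},
    {"host": "b", "time": "3/19/2020 9:09:56 AM", "NumberOfRequests": 2},
]

def parse(t):
    """Parse the event timestamp format used in the raw data."""
    return datetime.strptime(t, "%m/%d/%Y %I:%M:%S %p")

# keep the latest event per host, then sum NumberOfRequests over those
latest = {}
for e in events:
    if e["host"] not in latest or parse(e["time"]) > parse(latest[e["host"]]["time"]):
        latest[e["host"]] = e

total = sum(e["NumberOfRequests"] for e in latest.values())
print(total)  # 3
```

In SPL terms that corresponds to taking latest(NumberOfRequests) per host before summing, rather than summing every extracted value.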
Hi all, I have a dashboard where users can add comments to a .csv lookup file.  The comments are only related to the day that they are added.  I would like to be able to clear down the .csv on a daily basis (around midnight).  Is there a way that I can do this using Splunk to keep all the code in one place? I plan to use the 'collect' command to send the contents to an index prior to removing all the entries in whatever way is possible. I have tried using outputlookup but only succeeded in writing blank lines to the .csv, not overwriting or removing the contents. Thanks
Dear Team, I am Karthik from Prudential Singapore. Our Phantom UAT server suddenly went down; when we attempt to restart the server it says pgbouncer failed, and a server reboot didn't help. I have pasted the error messages below. Could you please check and let me know how to resolve this error?

[frioux03@asgprholupht001 ~]$ dzdo /apps/phantom/bin/stop_daemon.sh all
phantom_decided is already stopped
phantom_workflowd is already stopped
phantom_ingestd is already stopped
phantom_actiond is already stopped
phantom_clusterd is already stopped
[frioux03@asgprholupht001 ~]$ dzdo /apps/phantom/bin/stop_phantom.sh
Shutting down all Phantom services
Phantom shutdown successful
[frioux03@asgprholupht001 ~]$ dzdo /apps/phantom/bin/start_phantom.sh
Starting all Phantom services
Phantom startup failed: pgbouncer

[342122@asgprholupht001 ~]$ systemctl status pgbouncer.service
● pgbouncer.service - A lightweight connection pooler for PostgreSQL
   Loaded: loaded (/etc/systemd/system/pgbouncer.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2020-07-08 10:55:52 UTC; 16min ago
  Process: 4870 ExecStop=/opt/phantom/bin/stop_pgbouncer.sh $MAINPID (code=exited, status=203/EXEC)
  Process: 4343 ExecReload=/usr/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
  Process: 4704 ExecStart=/usr/bin/pgbouncer -d -q ${BOUNCERCONF} (code=exited, status=0/SUCCESS)
 Main PID: 4706 (code=exited, status=0/SUCCESS)

Thanks,
Karthik
Hello, we're newbies to Splunk app development, using Splunk 7.3.5. We wrote a test app based on the sample here https://docs.splunk.com/Documentation/Splunk/7.3.5/Viz/Buildandeditforms, with a time picker and a dropdown list which is populated from the base search. We expect the app to do nothing except populate the dropdown list until the user has selected both a time range and a choice in the dropdown list. However, it doesn't work as expected. When the page is loaded:
- The dropdown list keeps showing "Populating..." for a long time (usually the same search returns within 1 minute), then shows "Search produced no results" at the end.
- The searches in the panels start to run as soon as the page is loaded, even before any user input. The search picks the default value in the dropdown list.

When we tried a full search instead of a base search, the app worked as expected. We must have missed something in the code. Would anyone please help? Thanks a lot.

<form>
  <label>WWW Statistics</label>
  <description>WWW statistics (department, browser information)</description>
  <search id="baseSearch">
    <query>
      <![CDATA[index=application host="landing.itsc.cuhk.edu.hk" sourcetype=access_combined POST OR GET status<400 | rex field=uri "\/(?<deptcode>[^\/]+)\/" ]]>
    </query>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="time">
      <label></label>
      <default>
        <earliest>-1d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="d_name" searchWhenChanged="true">
      <label>Select a department</label>
      <search base="baseSearch">
        <query> fields deptcode | stats count by deptcode </query>
      </search>
      <fieldForLabel>deptcode</fieldForLabel>
      <fieldForValue>deptcode</fieldForValue>
      <choice value="*">All</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Access rate for department $d_name$</title>
      <chart>
        <search base="baseSearch">
          <query> fields deptcode useragent | search deptcode=$d_name$ | timechart count</query>
        </search>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Time distribution of browser for department $d_name$</title>
      <chart>
        <search base="baseSearch">
          <query> fields deptcode useragent | search deptcode=$d_name$ | rename useragent as http_user_agent | lookup user_agents http_user_agent | timechart count by ua_family usenull=f useother=f</query>
        </search>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Browser distribution</title>
      <chart>
        <search base="searchBase">
          <query> fields deptcode useragent | search deptcode=$d_name$ | rename useragent as http_user_agent | lookup user_agents http_user_agent | stats count by ua_family</query>
        </search>
        <option name="charting.chart">pie</option>
      </chart>
    </panel>
    <panel>
      <single>
        <title>Total access for department $d_name$ between $fromDate$ and $toDate$</title>
        <search base="baseSearch">
          <query> fields deptcode useragent | search deptcode=$d_name$ | stats count </query>
          <done>
            <eval token="Tearliest">strftime($baseSearch1.info_min_time$,"%F %T")</eval>
            <eval token="Tlatest">strftime($baseSearch1.info_max_time$,"%F %T")</eval>
            <eval token="fromDate">strftime($earliest$,"%Y%m/%d %H:%M:%S")</eval>
            <eval token="toDate">strftime($latest$, "%Y%m/%d %H:%M:%S")</eval>
          </done>
        </search>
      </single>
    </panel>
  </row>
</form>
Hi all, recently I noticed that some information regarding *nix machines is missing. If I search "source=cpu", I only have events from two machines; it shows many more machines if I search "source=ps" or "source=df". I have looked through many .conf files, looking for a list that configures which machines are monitored, or somewhere I can blacklist/whitelist IPs to monitor. I have reviewed all the inputs.conf files that I found, and I don't see anything anomalous. In addition, I did some tests with "sh cpu.sh --debug", and got this message: "Not found any of commands [sar mpstat] on this host, quitting". Please, can you tell me where and how I have to configure this? Thanks in advance.
I am wondering if Splunk is able to extract only those events that are available within the Salesforce EventLogFile object. If Event Monitoring is not available in my org, will I only be able to see the login/logout files generally available within the Salesforce "standard" solution, or is there some sort of workaround? Thanks!
Hi, I'm looking for a way to hide a table column in an inline email table while still retaining the ability to use the hidden column in the email notification tokens. I want to use the token $result.recipients$ in the "To" field of my email notification, but I don't want to display the recipients column in the inline table I'm sending in the email. Is there a way to remove the column while still being able to use the token? I've tried both the table and fields commands to display only the columns I want, but they both completely remove the recipients field from the results, so it can't be used in the token. Thanks
When I pivot a particular data model, I get this error: "Datamodel 'Splunk_CIM_Validation.Vulnerabilities' had an invalid search, cannot get indexes to search". After inspecting the search.log, I noticed these two error messages:

07-08-2020 20:16:24.484 ERROR AdminManagerValidation - 'undefineduundefined' is not a time string.
07-08-2020 20:16:24.484 ERROR DataModelValidator - 'undefineduundefined' is not a time string.

Can someone please help me fix this issue?
Greetings, how do I send an alert email to multiple recipients? For example, I want to send the triggered alert to email A, email B, email C, and so on. Is it possible to do this without using CC? Thanks in advance.
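In case it helps: the To field of the email alert action accepts a comma-separated list of addresses, so the saved search's savedsearches.conf could carry something like the sketch below (the addresses are placeholders):

```
# savedsearches.conf (sketch)
action.email.to = userA@example.com, userB@example.com, userC@example.com
```

The same comma-separated list can also be typed directly into the "To" box in the alert's UI, with no CC involved.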
Hi everyone, I am looking to write a tstats subsearch inside a tstats where clause. I tried the below way, but my query gives no results. I want to write a subsearch with tstats across two indexes. Query:

| tstats latest(_time),values(relational_correlationId),values(tracePoint),values(timestamp),values(businessKey),values(businessValue) where [ tstats latest(_time) as _time ,values(relational_correlationId) as relational_correlationId ,values(tracePoint) as tracePoint,values(timestamp) as timestamp,values(content.businessFields{}.key) as businessKey,values(content.businessFields{}.value) as businessValue where index="mulesoft_index" earliest=-10m@m latest=now() by environment,businessGroup,appName,interfaceName,correlationId]

Please help me. Thanks & regards, Manikanth
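Two things stand out to me, though this is a sketch rather than a verified fix: a subsearch used in a where clause has to hand back field=value pairs rather than a whole result table, and tstats is a generating command, so inside the brackets it still needs a leading pipe. The shape would be roughly (field and index names copied from the query above; the fields/format step is my assumption):

```
| tstats latest(_time) AS _time values(tracePoint) AS tracePoint
    where index="mulesoft_index"
      [| tstats count where index="mulesoft_index" earliest=-10m@m latest=now()
         by correlationId
       | fields correlationId
       | format ]
    by correlationId
```

The fields/format pair reduces the subsearch output to an OR-ed list of correlationId=... terms that the outer where clause can consume.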
Hi team, in general, when we create a Docker container, the logs of that container are stored on the host machine at /DOCKER_PATH/docker-data/container/CONTAINER_ID/CONTAINER_ID.json. We are now using splunk-docker-logging-plugin, and after implementing it, the log file /DOCKER_PATH/docker-data/container/CONTAINER_ID/CONTAINER_ID.json is no longer created. The logs are pushed directly to the Splunk server, but they are not stored in the container log file on the Docker host machine. So can you please confirm whether we can store logs in both places: 1) forwarding to the Splunk server, and 2) storing the logs in /DOCKER_PATH/docker-data/container/CONTAINER_ID/CONTAINER_ID.json while the container is alive?
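As far as I know this behaviour is expected: a container gets exactly one logging driver, and the splunk driver replaces the default json-file driver rather than duplicating it, so the local .json file is never written. One commonly suggested alternative (an assumption on my part, not verified against your setup) is to keep the json-file driver and ship the files with a Splunk Universal Forwarder instead, e.g. in /etc/docker/daemon.json:

```
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
```

with a forwarder monitor input on the Docker host pointed at the container log directory, which keeps the files on disk while the container is alive and still gets the events into Splunk.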