All Topics

Hello, I am looking for options to add a non-existing field in a tstats command. The scenario is that the field doesn't exist. Normally I create a regex for searches, however it doesn't work the same way with tstats. Example query:
index=something sourcetype=something:something | rex field=source ".....(?<new_field>[0-9A-Z]+)"
This command will create the new_field field based on the source field. For tstats, the idea would be something like:
| tstats count max(_time) as _time where ....
Is this possible? Sorry for the lack of details.
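One possible sketch (untested; rex cannot run inside tstats itself, so the assumption here is that the needed value still lives in the indexed source field, and the index/sourcetype/regex are placeholders carried over from the example above): group the tstats output by source and apply the rex afterwards.
  | tstats count max(_time) as _time where index=something sourcetype=something:something by source
  | rex field=source ".....(?<new_field>[0-9A-Z]+)"
  | stats sum(count) as count max(_time) as _time by new_field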
Hello Good Day, Sorry for this very noob question. Is there a way that I can put a single value panel in a table using inline CSS, without referencing a CSS and JS file? For example, a makeresults example will do to give me an idea. Thanks
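A possible Simple XML sketch (untested; the panel id, CSS selector, and colors are placeholders, and the inner class names for single value panels vary by Splunk version): embed the CSS directly in the dashboard with a hidden html element, so no external .css or .js file is referenced.
  <row>
    <panel>
      <html depends="$alwaysHideCSSPanel$">
        <!-- The token is never set, so this html element stays hidden but its style still applies -->
        <style>
          #my_single .single-result { font-size: 48px; color: #d94e17; }
        </style>
      </html>
      <single id="my_single">
        <search>
          <query>| makeresults | eval value=42 | table value</query>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
  </row>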
I have a search where I need to find the average of the last three bins. Example: On my time filter I select a range of 10:00 - 10:30. I need to find the average of ONLY the first three bins: 581, 698, and 247. How can I create a search that does this? On this dashboard I use a time picker, so the search would need to be dynamic, as there would be new time inputs.
_time   Count
10:00   581
10:05   698
10:10   247
10:15   987
10:20   365
10:30   875
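One possible sketch (untested; the index name is a placeholder and 5-minute bins are assumed): sort the bins in ascending time order, keep the first three, and average them, so the logic follows whatever range the time picker supplies.
  index=your_index
  | bin _time span=5m
  | stats count by _time
  | sort 0 _time
  | head 3
  | stats avg(count) as avg_of_first_three_bins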
Hi, I am exploring some options for exporting data into a text file from Splunk. I have a scheduled saved search which produces results like the ones below in statistical table format. I need this to be written to a .txt file, and results written need to be appended to the existing txt file.
count   index   sourcetype   time                  results
0       A       B            04/05/2022 00:00:00   Success exceeds Failures
Thanks in advance!
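One possible sketch (untested; the host, credentials, output path, and saved search name are all placeholders): pull the saved search results over Splunk's REST export endpoint from a cron job and append them to the text file.
  # Untested sketch; adjust host, credentials, path, and saved search name.
  curl -k -u admin:changeme \
    https://splunk.example.com:8089/services/search/jobs/export \
    --data-urlencode search='| savedsearch "my_scheduled_search"' \
    -d output_mode=csv >> /opt/exports/results.txt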
Hello, I have a log file where the date is at the top of the log and the time for each event is at the start of each line, so something like this:
-- Log Continued 03/28/2022 00:00:00.471 --
00:00:36.526 xxxxx
00:04:01.809 xxxxx
00:04:09.267 xxxxx
00:10:19.039 xxxxx
How would I extract the date/time using props.conf or similar?
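A possible props.conf sketch (untested; the sourcetype name is a placeholder, and timestamp handling like this is finicky, so treat it only as a starting point). When Splunk can parse only a time of day from an event, it generally falls back to the date of the most recent event from the same source, so the "-- Log Continued --" header that carries the full date effectively seeds the date for the lines that follow.
  # props.conf - untested sketch; "my:timedlog" is a placeholder sourcetype
  [my:timedlog]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)
  TIME_FORMAT = %H:%M:%S.%3N
  MAX_TIMESTAMP_LOOKAHEAD = 40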
I'm wondering if Splunk can ingest data from Salesforce objects (Account, Contact, Opportunity, etc.) and use Splunk to create something akin to Salesforce reports, i.e. write a search that returns all Accounts where the annual revenue field value is greater than X. If this is possible, can someone please point me in the right direction? Or is Splunk only used to query event logs in Salesforce and not sObject data?
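A possible sketch (untested; assumes the Splunk Add-on for Salesforce is already collecting the Account object, and the index name, sourcetype name, and revenue threshold shown here are placeholders):
  index=salesforce sourcetype="sfdc:account" AnnualRevenue>1000000
  | dedup Id
  | table Name, AnnualRevenue, Industry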
We have a cloud instance of Splunk and a vendor, whose forwarders we do not control, sending data to our instance. I am trying to extract fields from their data, but their sourcetypes are large alphanumeric values and there are 100+ for just the Audit log (e.g. 812b245d-1da3-43a5-a6f8-0fbdc4f9286cAudit-too_small). This is making field extraction difficult to perform. How can I rename the sourcetype on these at the point of ingest, without involving our vendor (who is very Splunk-illiterate), so that I can perform field extractions? The sourcetype rename utility within Splunk seems to work, but with 100+ such sourcetypes this method is rather unwieldy and I am looking for a cleaner method. Much thanks
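A possible sketch (untested; the source pattern, regex, and new sourcetype name are placeholders, and on Splunk Cloud this would need to be deployed wherever parsing happens, typically via an app or a support request): rewrite the sourcetype with an index-time transform keyed on a regex, rather than renaming each sourcetype one by one.
  # props.conf - untested sketch
  [source::...vendor_audit...]
  TRANSFORMS-set_vendor_sourcetype = force_vendor_audit_sourcetype

  # transforms.conf - untested sketch
  [force_vendor_audit_sourcetype]
  SOURCE_KEY = MetaData:Sourcetype
  REGEX = Audit
  DEST_KEY = MetaData:Sourcetype
  FORMAT = sourcetype::vendor:audit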
Hi, I'm getting these errors in splunkd.log each time the query is executed.
04-05-2022 18:01:48.750 +0100 ERROR ExecProcessor [8917 ExecProcessorSchedulerThread] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 17:01:48.750 [metrics-logger-reporter-1-thread-1] INFO com.splunk.dbx.connector.health.impl.ConnectionPoolMetricsLogReporter - type=TIMER, name=unnamed_pool_-382175356_jdbc__jtds__sqlserver__//servername__212/table;useCursors__true;domain__xxx.com;useNTLMv2__true.pool.Wait, count=12, min=0.120249, max=36.824436, mean=1.0705702234360484, stddev=0.028345392065423972, p50=1.06918, p75=1.06918, p95=1.06918, p98=1.06918, p99=1.06918, p999=1.648507, m1_rate=2.79081711035706E-30, m5_rate=1.1687825901066073E-8, m15_rate=2.6601992470705972E-5, mean_rate=5.566605761092861E-4, rate_unit=events/second, duration_unit=milliseconds
04-05-2022 18:01:48.750 +0100 ERROR ExecProcessor [8917 ExecProcessorSchedulerThread] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 17:01:48.750 [metrics-logger-reporter-1-thread-1] INFO c.s.d.c.h.i.ConnectionPoolMetricsLogReporter - type=TIMER, name=unnamed_pool_-382175356_jdbc__jtds__sqlserver__//servername__212/mantis;useCursors__true;domain__xxx.com;useNTLMv2__true.pool.Wait, count=12, min=0.120249, max=36.824436, mean=1.0705702234360484, stddev=0.028345392065423972, p50=1.06918, p75=1.06918, p95=1.06918, p98=1.06918, p99=1.06918, p999=1.648507, m1_rate=2.79081711035706E-30, m5_rate=1.1687825901066073E-8, m15_rate=2.6601992470705972E-5, mean_rate=5.566605761092861E-4, rate_unit=events/second, duration_unit=milliseconds
Unfortunately I can see nothing pertaining to what the actual error is. If I use SQL Explorer, I can connect and pull data back without issue. However, the data that is collected is very sporadic if at all. We have a second DB connection running the same query etc. without issue. We're using Splunk 8.2.3.2 and db_connect 3.7.0. TIA Steve
Looking for a Splunk function or query to convert the "_time" field to a local timestamp. When we present a statistical table of data with a time field, that time field value should be converted to local time irrespective of the location where the query is executed. Example (one row of the table):
time: 4/5/22 9:01
Message ID: <DM5P102MB0126B6CF54A6B2F44B6F6BF295E49@DM5P102MB0126.NAMP102.PROD.OUTLOOK.COM>
Sender: Darren_Collishaw@amat.com
Recipient: tobycollishaw@hotmail.com
Subject: Courses - Youtube
MessageSize: 15201
AttachmentName: text.txt
dAttachmentName: text.html
FilterAction: continue
FinalRule: outbound_clean
TLS Version: TLSv1.2
The "timestamp" column in the example above should be changed according to the local time zone when we execute the query.
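A possible sketch (untested): render _time with strftime so the table carries an explicit, readable timestamp; note that Splunk Web normally displays _time in each user's configured time zone preference anyway, so the conversion here is mostly about producing a formatted string.
  ... | eval time_local = strftime(_time, "%Y-%m-%d %H:%M:%S %Z")
      | table time_local, "Message ID", Sender, Recipient, Subject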
hello all, I am trying to figure out why my iplocation report isn't providing the city/country under Statistics. Below is my search, which is providing the IP field in the table, but the other two columns are blank. Any assistance would be great here.
index=wineventlog EventCode=4624 | search src_ip="*" ComputerName="*" user="*" | eval "Source IP" = coalesce(src_ip,"") | eval clientip=src_ip | iplocation allfields=false "Source IP" | table "Source IP", city, country
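A possible sketch (untested): iplocation writes its output fields with capitalized names (City, Country, Region), so tabling lowercase city/country comes back empty; running iplocation directly on src_ip and tabling the capitalized names may be all that's needed.
  index=wineventlog EventCode=4624 src_ip=* ComputerName=* user=*
  | iplocation src_ip
  | table src_ip, City, Country, Region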
Hello, We have Splunk on Linux servers. Has anyone installed the Falcon Sensor from CrowdStrike on the Linux servers that host Splunk? CrowdStrike is a next-gen antivirus solution. Any issues or unforeseen consequences? Thanks,
I'm trying to use a token in a rex command, but can't. I'm setting the token as follows (token_keywords_mv is a multivalue token):
<set token="token_rex">mvjoin(mvmap('token_keywords_mv',"(?&gt;".'token_keywords_mv'."&lt;".'token_keywords_mv'."+?)"), "|")</set>
When I use it in a rex command, the expression is inserted verbatim:
... | rex field=_raw '(?i)$token_rex$'
It gives me the following error:
Error in 'rex' command: Encountered the following error while compiling the regex ''(?i)mvjoin(mvmap('token_keywords_mv'': Regex: missing closing parenthesis.
When I set the token to the result of the eval expression, as in the following, it works:
<set token="token_rex">(?&lt;lorem&gt;lorem+?)|(?&lt;ipsum&gt;ipsum+?)|(?&lt;situs&gt;situs+?)</set>
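A possible sketch (untested; assumes Simple XML and that the token is being set inside a <change>, <condition>, or <done> handler): <set> treats its body as a literal string, while <eval token> evaluates an eval expression, so something like the following may produce the regex you expect (the &lt;/&gt; ordering follows the working token at the end of the post, and rex is given double quotes):
  <eval token="token_rex">mvjoin(mvmap('token_keywords_mv', "(?&lt;" . 'token_keywords_mv' . "&gt;" . 'token_keywords_mv' . "+?)"), "|")</eval>
  ... | rex field=_raw "(?i)$token_rex$"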
Which add-on is best for Sophos NG Firewall data for CIM mapping, out of the following on Splunkbase?
https://splunkbase.splunk.com/app/6187/
https://splunkbase.splunk.com/app/4543/
https://splunkbase.splunk.com/app/5378/
Hello, Thanks for taking the time to read/consider my question! I'm working on reducing the overhead for Windows event logs that we are bringing in via UFs sitting on Windows workstations and servers, by trimming some of the redundant text at the end of each log using a props.conf file located within /etc/system/local on each heavy forwarder. My understanding was that if you placed a props.conf on the heavy forwarder it would effectively filter out the messages being sent to Splunk Cloud, but I'm starting to think that props.conf isn't read until the indexing tier. My question is this: if I need to keep indexAndForward=false on my heavy forwarders to avoid the licensing and overhead, how can I apply props.conf to filter events before Splunk Cloud? Do I need to submit a support ticket for them to place the props.conf within the cloud-based indexers? Many thanks in advance
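A possible sketch (untested; the sourcetype name and regex are placeholders): heavy forwarders do perform the parsing phase, so a SEDCMD in props.conf on the HF should trim the text before it is forwarded to Splunk Cloud, and this is unrelated to indexAndForward.
  # props.conf on the heavy forwarder - untested sketch; sourcetype and regex are placeholders
  [WinEventLog:Security]
  SEDCMD-trim_redundant_text = s/This event is generated[\S\s]*$//g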
Hi Splunkers, in our environment we use Splunk DB Connect. When we configure a new connection on different DBs, we are facing the following error (screenshot not included). The error is self-explanatory and would be very clear to us under normal conditions, but what sounds strange is:
1. When we configure the connection, we do NOT use encryption; we clearly disable encryption and flag the read-only parameter.
2. The error comes up with many DBs on the same network, but not all.
3. The DB team has confirmed that no encryption has been set on the DB servers of that network.
So my question is: why do we face a TLS error if we are telling Splunk not to use it? And how do we solve it?
Hello, When I try to install the cluster-agent-operator.yaml I get the error:
Error from server (Forbidden): error when creating "cluster-agent-operator.yaml": roles.rbac.authorization.k8s.io is forbidden: User "j" cannot create resource "roles" in API group "rbac.authorization.k8s.io" in the namespace "appdynamics": requires one of ["container.roles.create"] permission(s).
Thanks.
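A possible sketch (untested; the namespace and file name follow the post, while the admin context name is a placeholder): the message says user "j" is not allowed to create Role objects, so one way forward is to confirm the missing permission and then apply the manifest with an account that has cluster-admin (or equivalent RBAC-creation) rights.
  # Check whether the current user may create Roles in the namespace
  kubectl auth can-i create roles -n appdynamics

  # Apply the operator manifest using a context/account with sufficient RBAC rights
  kubectl --context my-admin-context apply -f cluster-agent-operator.yaml -n appdynamics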
I am not sure how to set BREAK_ONLY_BEFORE. I have tried the setting below; all my logs are in log4j format and start with a timestamp like [2022-04-05 11:18:23,839]:
BREAK_ONLY_BEFORE: date
My logs, which are sent to Splunk through fluentd as different events, look like this:
[2022-04-05 11:18:23,839] WARN Error while loading: connectors-versions.properties (com.amadeus.scp.kafka.connect.utils.Version)
java.lang.NullPointerException
at java.util.Properties$LineReader.readLine(Properties.java:434)
at java.util.Properties.load0(Properties.java:353)
at java.util.Properties.load(Properties.java:341)
at com.amadeus.scp.kafka.connect.utils.Version.<clinit>(Version.java:47)
at com.amadeus.scp.kafka.connect.connectors.kafka.source.router.K2KRouterSourceConnector.version(K2KRouterSourceConnector.java:62)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:380)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.versionFor(DelegatingClassLoader.java:385)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:355)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:328)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:261)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.registerPlugin(DelegatingClassLoader.java:253)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:222)
at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:199)
at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:60)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
I have 3 indexes that I need to join. One index is the changes that we have created in our Service Management tool. The second index is the Post Implementation Reviews (PIRs). The third index links the two tables together (Change Table, PIR Table, Link Table). The SourceId is the change RecId, the TargetId is the PIR RecId. What I would like to do is join the indexes so that I can show the change information and the status of the PIR. What I have so far displays only the change information:
index=index_prod_sql_ism_change *
| rename Owner as SamAccountName
| lookup AD_Lookup SamAccountName OUTPUT DisplayName, Department
| dedup ChangeNumber
| table ChangeNumber, Status, TypeOfChange, Priority, DisplayName, OwnerTeam, Category, ScheduledStartDate, ScheduledEndDate
| sort ChangeNumber
Can someone please assist? Thanks
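A possible sketch (untested; the link and PIR index names, and the exact RecId/SourceId/TargetId field names, are assumptions based on the description): walk from the change record through the link index to the PIR record and carry the PIR status alongside the change fields.
  index=index_prod_sql_ism_change
  | rename RecId as SourceId
  | join type=left SourceId
      [ search index=your_link_index | fields SourceId, TargetId ]
  | join type=left TargetId
      [ search index=your_pir_index | rename RecId as TargetId, Status as PIR_Status | fields TargetId, PIR_Status ]
  | table ChangeNumber, Status, PIR_Status, TypeOfChange, Priority, OwnerTeam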
Hi Everyone, I am getting a big single event through a Python script from an API containing performance data, but it is not auto-extracting all the KV fields, and I need those details to get meaningful data. Also, the timestamp is coming in epoch format. Below is the event format:
{'d': {'__count': '0', 'results': [{'ID': '6085', 'Name': 'device1', 'DisplayName': None, 'DisplayDescription': None, 'cpumfs': {'results': [{'ID': '6117', 'Timestamp': '1649157300', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649157600', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649157900', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649158200', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649158500', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649158800', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649159100', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649159400', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649159700', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649160000', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}, {'ID': '6117', 'Timestamp': '1649160300', 'DeviceItemID': '6085', 'pct_im_Utilization': '4.0'}, {'ID': '6117', 'Timestamp': '1649160600', 'DeviceItemID': '6085', 'pct_im_Utilization': '1.0'}]}, 'memorymfs': {'results': [{'ID': '6118', 'Timestamp': '1649157300', 'DeviceItemID': '6085', 'im_Free': '2.809298944E9', 'pct_im_Utilization': '83.0702196963489'}, {'ID': '6118', 'Timestamp': '1649157600', 'DeviceItemID': '6085', 'im_Free': '2.741796864E9', 'pct_im_Utilization': '83.4770099337781'}, {'ID': '6118', 'Timestamp': '1649157900', 'DeviceItemID': '6085', 'im_Free': '2.784014336E9', 'pct_im_Utilization': '83.2225932482694'}, {'ID': '6118', 'Timestamp': '1649158200', 'DeviceItemID': '6085', 'im_Free': '2.739892224E9', 'pct_im_Utilization': '83.4884879350163'}, {'ID': '6118', 'Timestamp': '1649158500', 'DeviceItemID': '6085', 'im_Free': '2.812264448E9', 'pct_im_Utilization': '83.0523485718404'}, {'ID': '6118', 'Timestamp': '1649158800', 'DeviceItemID': '6085', 'im_Free': '2.747793408E9', 'pct_im_Utilization': '83.4408727427832'}, {'ID': '6118', 'Timestamp': '1649159100', 'DeviceItemID': '6085', 'im_Free': '2.808725504E9', 'pct_im_Utilization': '83.0736754386571'}, {'ID': '6118', 'Timestamp': '1649159400', 'DeviceItemID': '6085', 'im_Free': '2.744528896E9', 'pct_im_Utilization': '83.4605457900666'}, {'ID': '6118', 'Timestamp': '1649159700', 'DeviceItemID': '6085', 'im_Free': '2.804084736E9', 'pct_im_Utilization': '83.1016422674804'}, {'ID': '6118', 'Timestamp': '1649160000', 'DeviceItemID': '6085', 'im_Free': '2.740002816E9', 'pct_im_Utilization': '83.4878214704282'}, {'ID': '6118', 'Timestamp': '1649160300', 'DeviceItemID': '6085', 'im_Free': '2.7926528E9', 'pct_im_Utilization': '83.1705349587829'}, {'ID': '6118', 'Timestamp': '1649160600', 'DeviceItemID': '6085', 'im_Free': '2.736328704E9', 'pct_im_Utilization': '83.5099629050747'}]}}
In the above event, it is displaying CPU and memory utilization multiple times at different epoch times for each device. I have removed the trailing part of the event containing data for other devices, as it was exceeding the forum limit to post. I need to get the utilization data device-wise. Please help on this. Thanks
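A possible sketch (untested; the index and sourcetype are placeholders). The event looks like a Python-style dict with single quotes and bare None values, which is likely why Splunk's automatic JSON/KV extraction does not pick it up; converting it to JSON at search time and walking the nested results with spath is one hedge, though the naive quote replacement will break if any value contains an apostrophe.
  index=your_index sourcetype=your_sourcetype
  | eval _raw=replace(replace(_raw, "'", "\""), "None", "null")
  | spath path=d.results{} output=device
  | mvexpand device
  | spath input=device path=Name output=device_name
  | spath input=device path=cpumfs.results{} output=cpu_sample
  | mvexpand cpu_sample
  | spath input=cpu_sample
  | eval sample_time=strftime(tonumber(Timestamp), "%Y-%m-%d %H:%M:%S")
  | table device_name, sample_time, pct_im_Utilization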
I am parsing logs using Splunk and there are two types of logs:
1. API endpoint info and user ID
2. Logs which contain the specific error that I am interested in (let's say the error is ERROR_FAIL)
I need all logs for a particular user hitting the endpoint and getting ERROR_FAIL. Both logs have the same request id for one instance of an API call. So firstly I want to filter the request ID from point 1, which will give me the request id for the API and user I am interested in, and based on that request id I want to see all the logs that have failed because of the error (ERROR_FAIL). Now if I use the following query, I get all the request ids for the user and API:
index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123" | table xrid
Now if I add this as a sub-search, it does not work. Final query:
index=app-Prod sourcetype=prod-app-logs [search index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123" | table xrid] "ERROR_FAIL" | table xrid
This does not return anything. There are logs where user 123 hits "api/rest/v1/entity" and gets "ERROR_FAIL". How can I make my query correct?
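A possible sketch (untested; assumes xrid is also extracted as a search-time field on the ERROR_FAIL events): when a subsearch feeds an outer search, each returned row becomes a field=value condition, so returning only the xrid field, de-duplicated and formatted, tends to be more reliable than table.
  index=app-Prod sourcetype=prod-app-logs "ERROR_FAIL"
      [ search index=app-Prod sourcetype=prod-app-logs "api/rest/v1/entity" "123"
        | dedup xrid
        | fields xrid
        | format ]
  | table xrid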