All Topics


I wish to start using TLS and mutual authentication for my forwarders. I'm running Splunk Enterprise 8.2.4 on Windows 2019 with a mix of Windows and Linux forwarders, all on 8.2.5. My indexers use certificates signed by our PKI, and the forwarders are currently using the default UF certificate, which is obviously not good. I can get certificates issued from our PKI for the UFs, but the guide I'm reading does not indicate what the key usage or enhanced key usage for the cert should be. I would have assumed client authentication? Looking at the default certificates, these attributes are not specified.
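Not an answer on the key-usage attributes, but for reference, a minimal sketch of where the certificates plug in for mutual TLS between a UF and an indexer; the paths, group name and port are placeholders, and the CA that signed both certs also needs to be trusted via sslRootCAPath in server.conf on each side:

# outputs.conf on the universal forwarder
[tcpout:ssl_indexers]
server = indexer1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/uf_client.pem
sslPassword = <key password>
sslVerifyServerCert = true

# inputs.conf on the indexer
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer_server.pem
sslPassword = <key password>
requireClientCert = true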
Let's say I have a search and a very basic lookup table (csv). What I want to achieve is to use the values in the table for my search. So my table.csv:

id  name
1   first
2   second
3   third

Now, I want to simply run a query like the one below, which returns every single log that has any of the ids from my lookup table:

index=myIndex sourcetype=mySourcetype id IN somelookup

where id is in table.csv's id column. The second challenge then would be to actually have the name column values added as a field to the results for clarity.
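A sketch of how this is commonly handled, assuming table.csv is uploaded as a lookup file (or has a lookup definition) with fields id and name: the subsearch turns the id column into an id=1 OR id=2 OR ... filter, and the lookup command then adds name to each result:

index=myIndex sourcetype=mySourcetype [ | inputlookup table.csv | fields id ]
| lookup table.csv id OUTPUT name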
Does anyone know of a way to reverse the order of the automatic start/end values used for bucket creation when working with timechart (or other similar commands)? For example, if I have a timechart with a span of 7 days and a search window of 30 days, the 7-day buckets start with the oldest data and move forward to the most recent results, so the most recent bucket has a small sample size; maybe only two or three days rather than the full seven. It seems reasonable to me that most people would care more about the recent data and less about the oldest (incomplete) data, and would therefore want an option to adjust this behavior: start at now() and work backwards rather than starting at earliest() and working forward. Am I missing something? Any tips or tricks? I asked support and was basically told this is by design with no real workaround. That doesn't seem right to me. Oh, and again, this is not necessarily limited to timechart, but that's where I run into this frustration the most.
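One thing that may be worth trying, if I'm reading the bin documentation right: bin has an aligntime option that anchors the buckets to a reference point instead of the earliest event, so rebuilding the chart with bin and stats (timechart itself doesn't expose this) and aligning to latest should leave the partial bucket at the oldest end rather than the newest. A hedged sketch, with the index and span as placeholders:

index=my_index earliest=-30d
| bin _time span=7d aligntime=latest
| stats count by _time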
Hi Team, we have recently configured and ingested the Azure Active Directory logs into Splunk. We installed the "Splunk Add-on for Microsoft Office 365" on our Heavy Forwarder server and followed the documentation process below:

https://blog.avotrix.com/to-collect-ad-azure-logs-to-splunk/

In the add-on we provided the tenant details (Tenant ID and Client ID), after which I created the input (Add Inputs --> "Management Activity"), provided the requested details and saved it. The logs are being ingested into Splunk as desired and we are getting the relevant fields with the required data. But one important field is missing: the "Application Name". We want the Application Name (the application the user logged on to) because it would be really helpful for analysis. The field is available in Azure AD but not in the logs ingested into Splunk. We can see the fields below getting extracted, but not the "Application Name" field; moreover the field is not present in the raw logs either. So how can we get that field ingested into Splunk as well?

Sample list of fields which are getting extracted automatically: ActorContextId, ActorIpAddress, Actor{}.ID, Actor{}.Type, ApplicationId, AzureActiveDirectoryEventType, ClientIP, CreationTime, DeviceProperties{}.Name, DeviceProperties{}.Value, ErrorNumber, ExtendedProperties{}.Name, ExtendedProperties{}.Value, Id, InterSystemsId, IntraSystemId, LogonError, ModifiedProperties{}.Name, ModifiedProperties{}.NewValue, ModifiedProperties{}.OldValue, ObjectId, Operation.

So kindly help to check how to extract the Application Name field.
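One workaround that might help in the meantime, since ApplicationId is already extracted: enrich at search time with a lookup that maps application IDs to display names. The lookup file app_id_to_name.csv and its ApplicationName column are hypothetical here; it would need to be exported from Azure AD and uploaded first, and the sourcetype should be checked against your actual data:

sourcetype="o365:management:activity" Operation=UserLoggedIn
| lookup app_id_to_name.csv ApplicationId OUTPUT ApplicationName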
I am trying to write an add-on to eliminate some values in a specific field by placing props.conf and transforms.conf into the splunk/etc/apps directory, but I failed to get any result. Please give me some advice; my configuration files are as follows:

props.conf:

[source: path to the log file]
TRANSFORMS-elim= elimValue

transforms.conf:

[elimValue]
REGEX=^(type=[A-T]+)
DEST_KEY=queue
FORMAT=nullQueue

Grateful for any help, thanks.
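For comparison, a sketch of how this stanza pair is usually written when the goal is to drop matching events at index time; note the double colon in the source stanza (the path itself is a placeholder), and that the files need to live in the app's default or local directory on the parsing tier, followed by a restart:

# props.conf
[source::/var/log/myapp/app.log]
TRANSFORMS-elim = elimValue

# transforms.conf
[elimValue]
REGEX = ^type=[A-T]+
DEST_KEY = queue
FORMAT = nullQueue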
hello
I use 2 similar searches. In the first I timechart the results:

| bin _time span=1h
| stats count as Pb by tutu _time
| search Pb > 1
| timechart span=1h dc(tutu)

and in the second I stats the results:

| bin _time span=1h
| stats count as Pb by tutu _time
| search Pb > 1

What I don't understand is that the result of the timechart command (screen 1) for a specific hour (18h for example) is different (I just have one event) from the result of the stats command (screen 2), where I have 7 events.

screen 1 (screenshot)
screen 2 (screenshot)

thanks
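A diagnostic sketch rather than an explanation: computing the hourly distinct count with stats over the same 1-hour bins should, in principle, match what timechart span=1h dc(tutu) reports, so comparing the two outputs (and the _time bucket boundaries on both sides) may show where the 18h discrepancy comes from:

| bin _time span=1h
| stats count as Pb by tutu _time
| search Pb > 1
| stats dc(tutu) as distinct_tutu by _time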
Hi all, I have multiple users and multiple Dashboard Studio dashboards. I want to check which user is downloading which dashboard. How can I do that?
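A possible starting point, assuming dashboard exports (PDF/PNG) show up in Splunk Web's access logs; the URI patterns below are guesses and would need to be adjusted to whatever the export requests actually look like in your environment:

index=_internal sourcetype=splunk_web_access (uri="*pdfgen*" OR uri="*export*")
| stats count by user, uri_path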
hello
I use a transpose command in a table panel:

| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime
| transpose 0 header_field=time column_name=KPI include_empty=true

But randomly, instead of having the time field in the header, I have row1, row2, row3... What is wrong, please?
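One thing that might be worth ruling out, as a guess: if the field named in header_field is null or missing on some rows, transpose may fall back to the generic row1, row2... headers, so forcing a value before transposing is a cheap test:

| eval time=coalesce(strftime(_time,"%H:%M"), "unknown")
| sort time
| fields - _time _span _origtime
| transpose 0 header_field=time column_name=KPI include_empty=true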
I am using the query below to fill in 0 for dates where we have a missing value and get those dates on the chart, but it is not working. Could anyone please help me here?

base search
| eval timestamp_epoc = strptime(timestamp,"%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval date_picker = strftime(timestamp_epoc,"%Y-%m-%d")
| search requestURI="/api/v1/home/reseller/*"
| eval hqid = substr(requestURI,23,10)
| search $hqid$
| eval status_success=if(httpStatus="200",1,0)
| eval status_fail= if(httpStatus != "200",1,0)
| stats sum(status_success) as status_success, sum(status_fail) as status_fail by hqid,date_picker
| eval status = case((status_fail>0 AND status_success>0), "Multiple successful logins", (status_fail>0), "Multiple failed logins", (status_success>0), "Successful logins", 1=1, "Other")
| fillnull value=0 date_picker hqid status
| chart count(hqid) by date_picker,status
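A sketch of one common way around this, assuming the end goal is a per-day chart with zero-filled days: fillnull cannot create rows for dates that never appear in the results, but timechart creates a bucket for every day in the search window, so keeping an epoch _time and charting with timechart fills the gaps. This simplifies away the hqid grouping and the "Multiple logins" categorisation, which would need to be layered back in:

base search
| eval _time = strptime(timestamp,"%Y-%m-%dT%H:%M:%S.%3N%Z")
| search requestURI="/api/v1/home/reseller/*"
| eval status = if(httpStatus="200", "success", "fail")
| timechart span=1d count by status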
How would you return the count of only the Reachable devices? In the picture above you would return 8. When using the query below I receive a result of 0:

| stats count("SWITCHES_AND_HUBS{}.reachability_status"=REACHABLE)
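A sketch of the usual pattern, assuming all the statuses live in one JSON event as a multi-value field (as the screenshot suggests); count() can't take a comparison directly, so the values are filtered with mvfilter and counted, then summed across events:

| rename "SWITCHES_AND_HUBS{}.reachability_status" as reachability_status
| eval reachable = mvcount(mvfilter(reachability_status=="REACHABLE"))
| stats sum(reachable) as reachable_count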
New to Splunk, need your help. Data:

4/5/2022 9:02 PM | Audit | hi user | something.MoveFiles | Copied File from C:\hello.txt to server/something.txt
4/5/2022 9:02 AM | Audit | hi user | something.MoveFiles | Copied File from D:\reportsSuccess\_CMS.txt to \\server_CMS.txt
12/15/2022 10:02 PM | Audit | hi user | something.MoveFiles | Copied File from D:\reportsSuccess\_CMS.txt to \\server_CMS.txt
4/4/2022 5:00 AM | Audit | hi user | FileSplitter.ProcessFiles | Started Processing : ID
4/4/2022 5:00 AM | Audit | hi user | FileSplitter.ProcessFiles | Started Processing
4/4/2022 5:00 AM | Audit | hi user | FileReader.FileReader | FileReader for D:\reportsInput\tsst.TXT initilized
4/4/2022 5:00 AM | Audit | hi user | something.something.
11/4/2022 5:00 AM | Audit | hi user | something.something.
10/4/2021 5:00 AM | Audit | hi user | something.something.

12/15/2022 is being parsed as 2/15/22. Below is the props.conf I am using:

SHOULD_LINEMERGE=true
LINE_BREAKER=([/r/n]*)[0-9]+\/[0-9]+\/\d{4}\s[0-9]+:[0-9]+\s[P|A]M
TZ=EST
TIME_PREFIX =^
BREAK_ONLY_BEFORE=[0-9]+\/[0-9]+\/\d{4}\s[0-9]+:[0-9]+\s[P|A]M

Can you please help me to get the correct parsing? Thanks in advance.
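A sketch of a props.conf that should pin the timestamp down, assuming single-line events and a stanza name matching your sourcetype; the main additions are an explicit TIME_FORMAT (so 12/15/2022 cannot be read as month 2) and a lookahead long enough to cover the whole date and time:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{1,2}/\d{1,2}/\d{4}\s\d{1,2}:\d{2}\s[AP]M)
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %I:%M %p
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = US/Eastern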
Hello, I am looking for options to add a non-existing field in a tstats command. The scenario is that the field doesn't exist in the data. Normally I create a regex extraction for searches, but the same approach doesn't work with tstats. Example query:

index=something sourcetype=something:something
| rex field=source ".....(?<new_field>[0-9A-Z]+)"

This command creates the new_field field based on the source field. For tstats, the idea would be something like:

| tstats count max(_time) as _time where ...

Is this possible? Sorry for the lack of details.
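A sketch of the usual workaround, assuming the extraction really is based on source (an indexed field): tstats can only split by indexed fields, so the trick is to split by source inside tstats, run rex on the tstats output rows, and then re-aggregate by the extracted field:

| tstats count max(_time) as _time where index=something sourcetype=something:something by source
| rex field=source ".....(?<new_field>[0-9A-Z]+)"
| stats sum(count) as count, max(_time) as _time by new_field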
Hello, good day. Sorry for this very noob question: is there a way I can apply inline CSS to a single value panel or a table, without referencing a separate CSS and JS file? An example based on makeresults would be enough to give me an idea. Thanks
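A minimal sketch of a common workaround for classic Simple XML dashboards (Dashboard Studio is a different story): a hidden <html> panel carries a <style> block inline, so no external CSS or JS file is referenced; the panel id, the .single-result selector and the colours are assumptions to adapt:

<dashboard>
  <label>Inline CSS example</label>
  <row depends="$alwaysHideCSS$">
    <panel>
      <html>
        <style>
          #my_single_value .single-result {
            font-size: 48px !important;
            color: #d9534f !important;
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel id="my_single_value">
      <single>
        <search>
          <query>| makeresults | eval value=42 | table value</query>
        </search>
      </single>
    </panel>
  </row>
</dashboard>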
I have a search where I need to find the average of the first three bins. Example: on my time filter I select a range of 10:00 - 10:30. I need to find the average of ONLY the first three bins: 581, 698, and 247. How can I create a search that does this? On this dashboard I use a time picker, so the search would need to be dynamic, as there would be new time inputs.

_time    Count
10:00    581
10:05    698
10:10    247
10:15    987
10:20    365
10:30    875
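One way to do this, assuming 5-minute bins and that "first" means the earliest bins of whatever window the time picker supplies; sorting ascending and keeping the first three rows keeps it independent of the selected range:

index=my_index
| bin _time span=5m
| stats count as Count by _time
| sort 0 +_time
| head 3
| stats avg(Count) as avg_first_three_bins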
Hi, I am exploring some options for exporting data into a text file from Splunk. I have a scheduled saved search which produces results like below in statistical table format. I need this to be written to a .txt file, and results written need to be appended to the existing txt file.

count    index    sourcetype    time                    results
0        A        B             04/05/2022 00:00:00     Success exceeds Failures

Thanks in advance!
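One possible approach, assuming CLI/cron access on the search head and a saved search named my_scheduled_search (a placeholder): export via the CLI and append to the target file with a shell redirect, since Splunk does not append to arbitrary text files natively. Another option to look at is | outputcsv append=true, though that writes CSV under $SPLUNK_HOME/var/run/splunk/csv rather than a .txt of your choosing.

$SPLUNK_HOME/bin/splunk search '| savedsearch "my_scheduled_search"' -output csv -maxout 0 >> /path/to/results.txt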
Hello, I have a log file where the date is at the top of the log and the time for each event is at the start of each line, so something like this:

-- Log Continued 03/28/2022 00:00:00.471 --
00:00:36.526 xxxxx
00:04:01.809 xxxxx
00:04:09.267 xxxxx
00:10:19.039 xxxxx

How would I extract the date/time using props.conf or similar?
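A sketch of one approach, on the understanding (as far as I can tell from the timestamp rules) that when an event carries only a time of day, Splunk reuses the date from the most recent timestamp seen in the same file, which here would come from the "-- Log Continued ... --" header line; the stanza name is a placeholder, and the lookahead is left generous so the dated header can still be recognised:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 45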
I'm wondering if Splunk can ingest data from Salesforce objects (Account, Contact, Opportunity, etc.) and be used to create something akin to Salesforce reports, i.e., write a search that returns all Accounts where the annual revenue field value is greater than X. If this is possible, can someone please point me in the right direction? Or is Splunk only used to query event logs in Salesforce and not sObject data?
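For what it's worth, the Splunk Add-on for Salesforce has inputs for Salesforce objects as well as for event logs, so this kind of report-style search should be feasible once the object data is ingested. A hedged sketch; the index, sourcetype and field names are assumptions to be checked against what actually lands in your environment:

index=salesforce sourcetype="sfdc:account" AnnualRevenue > 1000000
| table Name, AnnualRevenue, Industry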
We have a cloud instance of Splunk and a vendor, whose forwarders we do not control, sending data to our instance. I am trying to extract fields from their data, but their sourcetypes are large alphanumeric values and there are 100+ for just the Audit log (e.g. 812b245d-1da3-43a5-a6f8-0fbdc4f9286cAudit-too_small). This is making field extraction difficult to perform. How can I rename the sourcetype on these at the point of ingest, without involving our vendor (who is very Splunk illiterate), so that I can perform field extractions? The sourcetype rename utility within Splunk seems to work, but with 100+ such sourcetypes this method is rather unwieldy and I am looking for a cleaner approach. Much thanks.
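A sketch of the index-time rewrite approach, assuming there is a parsing tier you can configure (a heavy forwarder you control, or Splunk Cloud via ingest actions/support); the source pattern in the props stanza is a placeholder for something all of the vendor's Audit data shares, and the transform overwrites the sourcetype before the data is indexed:

# props.conf
[source::/path/from/vendor/*]
TRANSFORMS-normalize_st = rewrite_vendor_audit_st

# transforms.conf
[rewrite_vendor_audit_st]
SOURCE_KEY = MetaData:Sourcetype
REGEX = Audit
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::vendor:audit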
Hi, I'm getting these errors in splunkd.log each time the query is executed.

04-05-2022 18:01:48.750 +0100 ERROR ExecProcessor [8917 ExecProcessorSchedulerThread] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 17:01:48.750 [metrics-logger-reporter-1-thread-1] INFO com.splunk.dbx.connector.health.impl.ConnectionPoolMetricsLogReporter - type=TIMER, name=unnamed_pool_-382175356_jdbc__jtds__sqlserver__//servername__212/table;useCursors__true;domain__xxx.com;useNTLMv2__true.pool.Wait, count=12, min=0.120249, max=36.824436, mean=1.0705702234360484, stddev=0.028345392065423972, p50=1.06918, p75=1.06918, p95=1.06918, p98=1.06918, p99=1.06918, p999=1.648507, m1_rate=2.79081711035706E-30, m5_rate=1.1687825901066073E-8, m15_rate=2.6601992470705972E-5, mean_rate=5.566605761092861E-4, rate_unit=events/second, duration_unit=milliseconds

04-05-2022 18:01:48.750 +0100 ERROR ExecProcessor [8917 ExecProcessorSchedulerThread] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 17:01:48.750 [metrics-logger-reporter-1-thread-1] INFO c.s.d.c.h.i.ConnectionPoolMetricsLogReporter - type=TIMER, name=unnamed_pool_-382175356_jdbc__jtds__sqlserver__//servername__212/mantis;useCursors__true;domain__xxx.com;useNTLMv2__true.pool.Wait, count=12, min=0.120249, max=36.824436, mean=1.0705702234360484, stddev=0.028345392065423972, p50=1.06918, p75=1.06918, p95=1.06918, p98=1.06918, p99=1.06918, p999=1.648507, m1_rate=2.79081711035706E-30, m5_rate=1.1687825901066073E-8, m15_rate=2.6601992470705972E-5, mean_rate=5.566605761092861E-4, rate_unit=events/second, duration_unit=milliseconds

Unfortunately I can see nothing pertaining to what the actual error is. If I use SQL Explorer, I can connect and pull data back without issue. However, the data that is collected is very sporadic, if it arrives at all. We have a second DB connection running the same query etc. without issue.

We're using Splunk 8.2.3.2 and db_connect 3.7.0.

TIA
Steve
Looking for a Splunk function or query to change the timestamp of the "_time" field to local time. When we present a statistical table of data with a time field, that time field value should be converted to local time irrespective of the location where the query is executed. Example record:

time: 4/5/22 9:01
Message ID: <DM5P102MB0126B6CF54A6B2F44B6F6BF295E49@DM5P102MB0126.NAMP102.PROD.OUTLOOK.COM>
Sender: Darren_Collishaw@amat.com
Recipient: tobycollishaw@hotmail.com
Subject: Courses - Youtube
MessageSize: 15201
AttachmentName: text.txt
dAttachmentName: text.html
FilterAction: continue
FinalRule: outbound_clean
TLS Version: TLSv1.2

The "time" column in the example above should get changed according to the local time zone when we execute the query.
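A sketch of the usual way to materialise the time as a readable string, assuming the goal is just display: strftime formats the epoch _time using the time zone configured for the user running the search (which is also how Splunk renders _time in tables by default), so the same query shows local time wherever it is executed:

| eval time = strftime(_time, "%Y-%m-%d %H:%M:%S %Z")
| table time, "Message ID", Sender, Recipient, Subject, MessageSize, FilterAction, FinalRule, "TLS Version"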