
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is there a way to test index-time operations without actually indexing logs? For example, can I provide a sample log file and see what the timestamp, host, sourcetype, source, and output after other operations like null-queuing would be? I currently use the "Add Data" section to test timestamping and line-breaking, but this doesn't show other metadata or what will be ingested after null-queuing. I also set up a quick bash command to make copies of the base log samples and have inputs continuously monitor the new files as I'm testing new sourcetypes, but this feels inefficient. Thanks in advance for any input!
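One lower-friction alternative to the copy-and-monitor loop (a sketch, assuming a throwaway index, here called `sandbox`, and a sourcetype name of your choosing) is to one-shot the sample file from the CLI:

```
$SPLUNK_HOME/bin/splunk add oneshot /tmp/sample.log -sourcetype my_new_st -index sandbox
```

Then inspect what actually landed:

```
index=sandbox sourcetype=my_new_st
| table _time host source sourcetype _raw
```

Events that were null-queued simply won't appear, which itself confirms the transform fired; the scratch index can be cleaned or deleted when you're done.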
Hi, sorry for this question, but I'm having difficulty understanding why a by clause with 3 fields returns fewer events than a clause with 2 fields. Is the difference in the field _time?

```
| stats count as PbPerf by _time toto tutu
| stats count as PbPerf by _time toto
```

Thanks
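A likely explanation: `stats ... by` silently drops any event where one of the by fields is null, so adding `tutu` discards every event that has no `tutu` value. A quick way to check (sketch) is to fill the nulls first and see whether the counts line up again:

```
| fillnull value="(null)" toto tutu
| stats count as PbPerf by _time toto tutu
```

If the totals now match the 2-field version, the missing events were the ones with no `tutu`.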
Hello All, I am trying to create a chart table from the data below. I have the table sorted by month name in descending order (rolling 13 months) with the search below, but I want the months (rolling 13 months) sorted in descending order on the x-axis. I used chart in the search but with no luck; I have spent a lot of time trying different approaches. I have added what I have tried, a sample input, and the desired output. Request you to kindly help me here. Below is the base search, where the month field is derived after several appends and summarized by stats; it is in string format.

```
| table ex_p month id Value
| chart values(Value) as final_value by ex_p,id
| sort - id
```

```
| table ex_p month id Value
| sort - id
| chart values(Value) as final_value by ex_p,month
```

SAMPLE INPUT

```
ex_p   month   id      Value
P1     Apr-22  202204  10 | 10%
P2     Apr-22  202204  20 | 15%
P3     Apr-22  202204  100 | 60%
P4     Apr-22  202204  27 | 100%
R P1   Apr-22  202204  12 | 45%
R P2   Apr-22  202204  36 | 89%
R P3   Apr-22  202204  16 | 30%
R P4   Apr-22  202204  28 | 65%
P1     Mar-22  202203  90 | 90%
P2     Mar-22  202203  57 | 120%
P3     Mar-22  202203  18 | 125%
P4     Mar-22  202203  76 | 76%
R P1   Mar-22  202203  80 | 70%
R P2   Mar-22  202203  78 | 99%
R P3   Mar-22  202203  97 | 85%
R P4   Mar-22  202203  08 | 09%
...    ...     ...     ...
RP4    21-Apr  202104  10 | 110%
```

Required OUTPUT

```
ex_p   Apr-22     Mar-22     ...   21-Apr
P1     10 | 10%   90 | 90%   ...   ...
P2     20 | 15%   57 | 120%  ...   ...
P3     100 | 60%  18 | 125%  ...   ...
P4     27 | 100%  76 | 76%   ...   ...
R P1   12 | 45%   80 | 70%   ...   ...
R P2   36 | 89%   78 | 99%   ...   ...
R P3   16 | 30%   97 | 85%   ...   ...
R P4   28 | 65%   08 | 09%   ...   ...
```
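Since chart sorts its split-by columns lexically (which is why month strings like "Apr-22" come out in the wrong order), one sketch is to build the column key from the numeric id, which the sample shows is always paired with the month. Inverting the id makes the lexical ascending sort produce newest-first columns:

```
| eval month_key = (999999 - id)."_".month
| chart values(Value) as final_value over ex_p by month_key
```

The numeric prefix then has to be stripped from the column headers, e.g. with a `rename` or `foreach` pass, or in the panel's column formatting; the `999999 - id` trick is only one way to get a descending sort key and assumes id is always a six-digit yyyymm number.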
This has been asked before, and the question seems to die. So here I am with a slightly different use case/phrasing. Dearest Splunk Devs, please let me use environment variables in my configs.

Issue: I have several heavy forwarders collecting logs from different endpoints. My users need to know which heavy forwarder the logs passed through. I want to add the heavy forwarder's hostname to the log as "collector".

Current situation:

transforms.conf

```
[addmeta]
REGEX = .
FORMAT = collector::$HOSTNAME
WRITE_META = true
```

props.conf

```
[generic_single_line]
TRANSFORMS-addmeta = addmeta
```

This results in the unfortunate log:

```
4/6/22 1:01:17.000 PM testing my props.conf with a simple log
collector = $HOSTNAME
sourcetype = generic_single_line
```

But what SHOULD be happening:

```
4/6/22 1:01:17.000 PM testing my props.conf with a simple log
collector = EventCollect01.domain.com
sourcetype = generic_single_line
```

What can I do to pull some sort of internal variable instead of hardcoding the host?
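As far as I know, Splunk does not expand environment variables in a transform's FORMAT, so the value has to be concrete by the time splunkd reads the file. One workaround (a sketch, assuming you can run a small script on each heavy forwarder at deploy time; file names here are hypothetical) is to template the hostname in before Splunk starts:

```shell
# Deploy-time templating sketch: bake the real HF hostname into
# transforms.conf, since Splunk won't expand env vars in FORMAT.
cat > transforms.conf.template <<'EOF'
[addmeta]
REGEX = .
FORMAT = collector::__HF_HOSTNAME__
WRITE_META = true
EOF
# Replace the placeholder with this machine's FQDN.
sed "s/__HF_HOSTNAME__/$(hostname -f)/" transforms.conf.template > transforms.conf
cat transforms.conf
```

A deployment server can't do this substitution itself, so the script would run as part of whatever provisioning you already use (ansible, puppet, a post-install hook, etc.).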
I have 2 Splunk queries.

The first query returns the Employee ID of the active and retired employees. The second query returns the Employee ID of the retired employees.

I want to merge both queries to get only the active employees, by removing the Retired_Employee_ID values from the list of Employee_Id values.

Query 1:

```
index=employee_data
| rex field=_raw <regular expression used to extract Employee_ID> offset_field=_extracted_fields_bounds
| table Employee_Id
```

Query 2:

```
index=employee_data
| rex field=_raw <regular expression used to extract Retired_Employee_ID> offset_field=_extracted_fields_bounds
| table Retired_Employee_ID
```
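One common pattern for "A minus B" is a NOT subsearch: run the retired-employee query inside brackets, rename its field to match, and exclude those IDs from the outer search. A sketch (the regex placeholders are kept from the question):

```
index=employee_data
| rex field=_raw <regular expression used to extract Employee_ID>
| search NOT
    [ search index=employee_data
      | rex field=_raw <regular expression used to extract Retired_Employee_ID>
      | rename Retired_Employee_ID as Employee_Id
      | fields Employee_Id ]
| table Employee_Id
```

Note the default subsearch result limit (around 10,000 rows); for very large retired lists, a `stats`/`eventstats` join on a combined search is the usual fallback.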
Need my SPL to count records for the previous calendar day:
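A minimal sketch (index name is a placeholder): snap the time range to day boundaries so it covers exactly yesterday, midnight to midnight.

```
index=your_index earliest=-1d@d latest=@d
| stats count
```

`-1d@d` means "one day ago, snapped back to the start of that day" and `@d` means "the start of today", so together they bound the previous calendar day.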
Hi, I need some help adding new servers to a dropdown list in an app dashboard.

We have some default apps on the Splunk search head. In one of the apps there is a dashboard to monitor login rates of different stadiums over a time range in UTC. Those stadiums are under one index, X, and now two more stadiums have been added under index Y. We need to add those stadiums to the dashboard dropdown so we can view logins. How can we include these new stadiums in that dashboard? I'm the admin here and this is a new task, as we don't have a Splunk developer on the team, so can anyone help me from scratch? Thanks in advance!
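If it is a classic Simple XML dashboard, the dropdown is defined in the XML (Edit > Source). If the existing dropdown uses static `<choice>` entries, you can simply add two more; if it is search-populated, widen the populating search to both indexes. A sketch, assuming the stadium name lives in a field called `stadium` (adjust to your real field):

```xml
<input type="dropdown" token="stadium_tok" searchWhenChanged="true">
  <label>Stadium</label>
  <!-- Populate the choices from both indexes; "stadium" is an assumed field name -->
  <search>
    <query>(index=X OR index=Y) | stats count by stadium | fields stadium</query>
  </search>
  <fieldForLabel>stadium</fieldForLabel>
  <fieldForValue>stadium</fieldForValue>
</input>
```

Remember that the panels' own base searches must also search `(index=X OR index=Y)`, or the new stadiums will appear in the dropdown but return no results.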
Hi, I need to convert the following into a single query that uses the EVAL command in order to perform extractions. I currently have the following:

```
index="identitynow" | spath path=action | rename action as authentication_method
index="identitynow" | spath path=name | rename name as authentication_service
index="identitynow" | spath path=message | rename message as reason
index="identitynow" | spath path=status | rename status as action
index="identitynow" | spath path=source | rename source as src
index="identitynow" | spath path=source_host | rename source_host as src_user_id
index="identitynow" | spath path=apiUsername | rename apiUsername as user
```

Is it possible to use the spath function with the EVAL command? Thank you so much for all your help!
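Yes: `spath(X, Y)` exists as an eval function, so all seven extractions can be collapsed into one eval that reads each path out of `_raw` and assigns it directly to the target field name:

```
index="identitynow"
| eval authentication_method = spath(_raw, "action"),
       authentication_service = spath(_raw, "name"),
       reason = spath(_raw, "message"),
       action = spath(_raw, "status"),
       src = spath(_raw, "source"),
       src_user_id = spath(_raw, "source_host"),
       user = spath(_raw, "apiUsername")
```

This is a sketch assuming the paths are top-level keys, as in your spath commands; nested paths work the same way (e.g. `spath(_raw, "a.b{0}")`).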
Hi Experts,

We have 4 physical indexers in a cluster, and for a few days the /splunk file system storage has been at its threshold on 2 of the 4 indexers. Is there any way to distribute the storage load equally across all 4 indexers? Would the data rebalancing option help here?
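Data rebalance is indeed the built-in tool for this: run from the cluster manager, it redistributes bucket copies so each peer holds a roughly equal share. A CLI sketch (run on the cluster manager; there is an equivalent page in the manager's UI):

```
splunk rebalance cluster-data -action status
splunk rebalance cluster-data -action start
```

One caveat: rebalancing moves existing buckets but does not change where new data lands, so if the skew comes from forwarders pinned to two of the indexers (e.g. a static outputs.conf server list without autoLB across all four), the imbalance will return until that is fixed.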
Hello Community,

I am having issues combining results to display in a pie chart. I tried a few things, such as mvappend, and it's not working correctly.

I have pulled a list of domains and want to display them in a pie chart. To get the list of domains and display them in a chart I am using the following:

```
rex field=netbiosName "^(?<Domain>[^\\\\]+)"
| stats count by Domain
```

This works as intended, but I have a couple of results that come up as both 'domain1' and 'domain1.com' and are displayed separately in the pie chart. I would like to combine these results, so that the count for both 'domain1' and 'domain1.com' is added together under just 'domain1'.

Thanks
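One sketch: normalize the extracted value before the stats, keeping only the part before the first dot (this assumes the short NetBIOS-style name is the label you want, and lowercasing guards against case-only duplicates):

```
rex field=netbiosName "^(?<Domain>[^\\\\]+)"
| eval Domain = lower(mvindex(split(Domain, "."), 0))
| stats count by Domain
```

With this, 'domain1' and 'domain1.com' both become 'domain1' and their counts merge into a single pie slice.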
I wish to start using TLS and mutual authentication for my forwarders. I'm running Splunk Enterprise 8.2.4 on Windows 2019 with a mix of Windows and Linux forwarders, all on 8.2.5. I have my indexers using our PKI-signed certificates, and the forwarders are currently using the default UF certificate, which is obviously not good. As such, I can get certificates issued from our PKI for the UFs, but reading this guide, it does not indicate what the key usage or enhanced key usage for the cert should be. I would have assumed client authentication? Looking at the default certificates, these attributes are not specified.
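Client authentication (the `clientAuth` EKU, OID 1.3.6.1.5.5.7.3.2) is indeed the natural request for a forwarder cert that only initiates outbound TLS; the default Splunk certs omit the extension entirely, which most TLS stacks treat as "any usage". A sketch for checking what your PKI actually issued (the self-signed generation here is only to produce a throwaway example cert; requires OpenSSL 1.1.1+ for `-addext`):

```shell
# Issue a throwaway self-signed cert carrying the clientAuth EKU, then
# inspect its extensions -- run the same inspection on your real PKI cert.
openssl req -x509 -newkey rsa:2048 -keyout uf.key -out uf.pem -days 1 -nodes \
  -subj "/CN=uf01.example.com" -addext "extendedKeyUsage=clientAuth" 2>/dev/null
openssl x509 -in uf.pem -noout -text | grep -A1 "Extended Key Usage"
```

If mutual auth is enforced with `requireClientCert = true` on the indexers, a cert lacking `clientAuth` (when EKU is present but lists only `serverAuth`) may be rejected, so asking your PKI team for `clientAuth` is the safe bet.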
Let's say I have a search and a very basic lookup table (CSV). What I want to achieve is to use the values in the table for my search. So my table.csv:

```
id  name
1   first
2   second
3   third
```

Now, I want to simply run a query which returns every single log that has any of the ids from my lookup table, something like:

```
index=myIndex sourcetype=mySourcetype id IN somelookup
```

where id is in table.csv's id column.

The second challenge then would be to actually have the name column values added as a field to the results for clarity.
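Both parts can be done in one query (a sketch, assuming table.csv is uploaded as a lookup file): an `inputlookup` subsearch expands into `(id=1 OR id=2 OR id=3)` as a filter, and a `lookup` then enriches the surviving events with the name column.

```
index=myIndex sourcetype=mySourcetype
    [ | inputlookup table.csv | fields id ]
| lookup table.csv id OUTPUT name
```

The `| fields id` inside the brackets matters: the subsearch returns only that column, so the generated filter matches on `id` alone.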
Does anyone know of a way to reverse the order of the automatic start/end values used for bucket creation when working with timechart (or other similar commands)? For example, if I have a timechart with a span of 7 days and a search window of 30 days, the 7-day buckets start with the oldest data and move forward to the most recent results, so the most recent bucket has a small sample size; maybe only two or three days rather than the full seven. It seems reasonable to me that most people care more about the recent data and less about the oldest (incomplete) data, and would therefore want an option to adjust this behavior: start at now() and work backwards rather than starting at earliest() and working forward. Am I missing something? Any tips or tricks? I asked support and was basically told this is by design with no real workaround. That doesn't seem right to me. Oh, and again, this is not necessarily limited to timechart, but that's where I run into this frustration the most.
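There is an option worth trying: the `bin` command's `aligntime` argument, which anchors bucket boundaries to a time of your choosing, including the end of the search window. A sketch (replacing the implicit binning that timechart does):

```
index=your_index earliest=-30d@d
| bin _time span=7d aligntime=latest
| stats count by _time
```

With `aligntime=latest` the bucket edges are counted backwards from the end of the range, so the partial bucket falls on the oldest data instead of the newest. The trade-off is that you lose timechart's conveniences (split-by columns, nice formatting) and rebuild them with `stats`/`xyseries`; I don't believe timechart itself exposes aligntime.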
Hi Team,

We have recently configured and ingested the Azure Active Directory logs into Splunk. We installed the "Splunk Add-on for Microsoft Office 365" on our heavy forwarder server and followed the documentation process here: https://blog.avotrix.com/to-collect-ad-azure-logs-to-splunk/

In the add-on we provided the tenant details, i.e. Tenant ID and Client ID, after which I created the inputs (Add Inputs > "Management Activity"), provided the requested details, and saved. The logs are being ingested into Splunk as desired and we are getting the relevant fields with the required data. But one important field is missing: the "Application Name". We want the Application Name field showing which application the user logged on to, as it would be really helpful for analysis. The field is available in Azure AD, but not in the logs ingested into Splunk.

We can see the fields below being extracted, but not the "Application Name" field; moreover, the field is not present in the raw logs either. How can we get that field ingested into Splunk as well?

Sample list of fields which are getting extracted automatically:

```
ActorContextId
ActorIpAddress
Actor{}.ID
Actor{}.Type
ApplicationId
AzureActiveDirectoryEventType
ClientIP
CreationTime
DeviceProperties{}.Name
DeviceProperties{}.Value
ErrorNumber
ExtendedProperties{}.Name
ExtendedProperties{}.Value
Id
InterSystemsId
IntraSystemId
LogonError
ModifiedProperties{}.Name
ModifiedProperties{}.NewValue
ModifiedProperties{}.OldValue
ObjectId
Operation
```

So kindly help to check how to extract the Application Name field.
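Since the ingested events do carry `ApplicationId`, one search-time workaround (a sketch, assuming you can export an app-ID-to-name mapping from the Azure portal into a CSV lookup, here hypothetically named azure_app_names.csv with columns ApplicationId and ApplicationName; index/sourcetype names are examples) is to enrich at search time rather than at ingest:

```
index=o365 sourcetype="o365:management:activity"
| lookup azure_app_names.csv ApplicationId OUTPUT ApplicationName
```

If you need the name present in the raw data itself, the Management Activity feed simply may not include it, and a source that does (such as the Azure AD sign-in logs via an Azure-specific add-on) would be the thing to ingest instead.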
I am trying to write an add-on to eliminate some values in a specific field by dropping a folder containing props.conf and transforms.conf into the splunk/etc/apps directory, but I failed to get any result. Please give me some advice; my configuration files are as follows:

props.conf:

```
[source: path to the log file]
TRANSFORMS-elim = elimValue
```

transforms.conf:

```
[elimValue]
REGEX = ^(type=[A-T]+)
DEST_KEY = queue
FORMAT = nullQueue
```

Grateful for any help, thanks.
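Two common gotchas to check first. A source-matching props stanza needs the `source::` prefix (double colon), and these index-time transforms must live on the first full Splunk instance that parses the data (an indexer or heavy forwarder; a universal forwarder ignores them). A corrected sketch, with a placeholder path:

```
# props.conf -- note the double colon in the stanza name
[source::/var/log/myapp/app.log]
TRANSFORMS-elim = elimValue
```

The transforms.conf stanza itself looks fine as written; also remember that the files normally go under the app's `local/` or `default/` directory (e.g. etc/apps/my_addon/local/props.conf), and a restart of splunkd is needed for index-time config to take effect.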
hello, I use 2 similar searches. In the first I timechart the results:

```
| bin _time span=1h
| stats count as Pb by tutu _time
| search Pb > 1
| timechart span=1h dc(tutu)
```

and in the second I stats the results:

```
| bin _time span=1h
| stats count as Pb by tutu _time
| search Pb > 1
```

What I don't understand is that the result of the timechart command (screen 1) for a specific hour (18h for example) is different (I just have one event) than the result of the stats command (screen 2), where I have 7 events.

[screenshot: screen 1 — timechart results]
[screenshot: screen 2 — stats results]

thanks
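The two outputs measure different things: the stats search yields one row per (tutu, hour) pair, while `dc(tutu)` collapses each hour into a single row counting distinct tutu values; timechart may also re-bin onto its own hour boundaries, which can shift rows between buckets. A diagnostic sketch that computes the same number timechart should show, without timechart's re-binning, makes the comparison direct:

```
| bin _time span=1h
| stats count as Pb by tutu _time
| search Pb > 1
| stats dc(tutu) as distinct_tutu by _time
```

If this shows 7 for the 18h bucket but the timechart shows 1, the discrepancy is in how timechart is re-bucketing `_time`, not in the data.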
Hi all, I have multiple users and multiple Dashboard Studio dashboards. I want to check which user is downloading which dashboard. How can I do that?
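Dashboard exports go through splunkd web endpoints, so the UI access log is one place to look. The exact `uri_path` varies by Splunk version and export type (PDF vs PNG, classic vs Studio), so a practical sketch is to perform a download yourself, then search for the path that appears and build a report on it:

```
index=_internal sourcetype=splunkd_ui_access user=*
    (uri_path=*export* OR uri_path=*pdfgen* OR uri_path=*render*)
| stats count by user, uri_path, status
```

Treat the uri_path patterns above as starting points for discovery, not a definitive list.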
hello, I use a transpose command in a table panel:

```
| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime
| transpose 0 header_field=time column_name=KPI include_empty=true
```

But randomly, instead of having the time field in the header, I have row1, row2, row3... What is wrong, please?
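transpose falls back to its generic row1, row2, ... names when the header_field value is missing on some rows, which can happen intermittently, e.g. while the search is still previewing and `_time` is not yet populated on every row. One guard (a sketch) is to drop or fill rows with no usable header value before transposing:

```
| eval time=strftime(_time,"%H:%M")
| where isnotnull(time)
| sort time
| fields - _time _span _origtime
| transpose 0 header_field=time column_name=KPI include_empty=true
```

If the problem only appears while the panel is loading and the final result is correct, disabling preview on that panel's search may also make it go away.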
I am using the query below to fill in 0 for dates where we have a missing value, so those dates appear on the chart, but this is not working. Could anyone please help me here?

```
base search
| eval timestamp_epoc = strptime(timestamp,"%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval date_picker = strftime(timestamp_epoc,"%Y-%m-%d")
| search requestURI="/api/v1/home/reseller/*"
| eval hqid = substr(requestURI,23,10)
| search $hqid$
| eval status_success=if(httpStatus="200",1,0)
| eval status_fail=if(httpStatus != "200",1,0)
| stats sum(status_success) as status_success, sum(status_fail) as status_fail by hqid,date_picker
| eval status = case((status_fail>0 AND status_success>0), "Multiple successful logins", (status_fail>0), "Multiple failed logins", (status_success>0), "Successful logins", 1=1, "Other")
| fillnull value=0 date_picker hqid status
| chart count(hqid) by date_picker,status
```
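The reason fillnull doesn't help here is that it only fills fields on rows that already exist; it cannot invent rows for days that had no events at all. One sketch is to let timechart create the daily buckets instead, since it emits empty buckets as 0 across the whole search range (this assumes `_time` reflects the event time, replacing the string date_picker grouping):

```
base search requestURI="/api/v1/home/reseller/*"
| eval status=if(httpStatus="200","success","fail")
| timechart span=1d count by status
```

If you need the per-hqid status categories from your case() logic rather than a raw count, the usual alternative is to append a scaffold of all expected dates (e.g. generated with makeresults/mvrange) to the stats output before the final chart.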
How would you return the count of only the Reachable devices? In the picture above you would return 8. When using the query below I receive a result of 0:

```
| stats count("SWITCHES_AND_HUBS{}.reachability_status"=REACHABLE)
```
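`count(X)` expects a field name, not a condition, which is why the comparison inside it yields 0. The standard form is to wrap the condition in `eval()`; note the single quotes around the field name (needed because of the braces and dots) and double quotes around the string value:

```
| stats count(eval('SWITCHES_AND_HUBS{}.reachability_status'="REACHABLE")) as reachable_count
```

Since that field is multivalue (the `{}` notation), each event contributes a count per matching value; if you instead want one total across all devices in all events, expanding first with `mvexpand` before the stats gives the per-device count explicitly.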