All Posts



Thank you for all the clarifications 
Thanks! That seemed to do it
1. Actually, you can change the user Splunk runs as. It boils down to changing ownership of the installation directory and everything inside it, and changing the configuration of the splunkforwarder service so that it logs on as another user. It's not an officially endorsed way and it's not supported, but it should work.

2. With older versions of the UF, it ran as the Local System user by default. New versions use a user with somewhat more "trimmed" permissions. Of course the necessary permissions follow from what the UF does, which means reading the event logs or calling perfmon. There are also additional permissions needed, as @_JP pointed out, to read the specific files you want to ingest. Those you'll have to grant yourself.

About the difference between using an AD-based account and a local one: with a local account you won't be able to collect data remotely over WMI (there is no way to make Splunk authenticate such a connection) and you might have problems ingesting files from network shares - in general, anything that involves authenticating over the network, which is normally done behind the scenes by domain mechanisms.
You can't do it like that. It's not an eval, so the expression will be treated literally. You'd have to use a subsearch to create that value dynamically.
We think alike. I tried that before, and although I got no error, I also got no result.
Hi @eranhauser ... Please check this and update us:

| makeresults
| eval timeTest=strftime((floor(now()/600))*600,"%Y-%m-%d %H:%M:%S")
| search index=test earliest=timeTest
I'm assuming you know about, and are modeling things after, the Splunk REST examples? Can you share a bit of your Python showing how you are returning your data? What I'm thinking is that your handle method may not be returning your data in the correct JSON format it needs to generate the response.
Try something like this:

| stats dc(Txn_id) as unique_tx_ids count avg(response_time) as average by Referer
| eval average_count_txns_id=count/unique_tx_ids
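To sanity-check the arithmetic that search performs (count of events divided by distinct txn_ids, plus the plain response-time average), here is a small Python sketch using the sample data from the question. One assumption: "123xyx" is treated as a typo for "123xyz", since the expected output implies yahoo has two distinct txn_ids.

```python
from collections import defaultdict

# Sample events from the question: (referer, txn_id, response_time).
# "123xyx" is assumed to be a typo for "123xyz".
events = [
    ("google", "abcd1234", 42), ("google", "abcd1234", 43), ("google", "abcd1234", 44),
    ("google", "1234abcd", 45), ("google", "1234abcd", 46), ("google", "1234abcd", 47),
    ("google", "1234abcd", 48),
    ("yahoo", "xyz123", 110), ("yahoo", "123xyz", 120), ("yahoo", "123xyz", 130),
]

rows = defaultdict(list)
for referer, txn, rt in events:
    rows[referer].append((txn, rt))

results = {}
for referer, group in rows.items():
    count = len(group)                          # count
    unique = len({t for t, _ in group})         # dc(Txn_id)
    avg_rt = sum(r for _, r in group) / count   # avg(response_time)
    results[referer] = (count / unique, avg_rt)

print(results)  # {'google': (3.5, 45.0), 'yahoo': (1.5, 120.0)}
```

Note the response-time average here is a plain average over all events in the group, which is what avg(response_time) computes.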
I am working on setting up a third-party evaluation of a new network management and security monitoring installation for an enterprise network that uses Splunk for various log aggregation purposes. The environment has 6 indexers with duplication across 3 sites, and hundreds of indexes set up and configured by the installers.

The question that I need to write a test for: "Is there sufficient storage available for compliance with data retention policies? (e.g. is there sufficient storage available to meet 5-year retention guidelines for audit logs?)"

I would like to run simple search strings to produce the necessary data tables. I am no wizard at writing the appropriate queries, and I don't have access to an environment that is complicated enough to try these things out before my limited time on the production environment to run my reports. After reading through the forums for hours, it seems like answering this storage question may be harder than originally anticipated, as Splunk does not seem to have any default awareness of how much on-disk space it is actually consuming.

1. Research has shown that I need to make sure that the age-off and size cap for each index are appropriately set with the frozenTimePeriodInSecs and maxTotalDataSizeMB settings in each indexes.conf file. Is there a search I can run that will provide a simple table of these two settings for all indexes across the environment? e.g. index name, server, frozenTimePeriodInSecs, maxTotalDataSizeMB

2. Is there any other configuration where allocated space is determined for an index that can be returned with a search?

3. Is there a search string I can run to show the current storage consumption (size on disk) for all indexes on all servers?
I have seen some options here on the forums and I think the answer for this one might be the following:

| dbinspect index=*
| eval sizeOnDiskGB=sizeOnDiskMB/1024
| eval rawSizeGB=rawSize/1024
| stats sum(rawSizeGB) AS rawTotalGB, sum(sizeOnDiskGB) AS sizeOnDiskTotalGB BY index, splunk_server

4. What is the best search string to determine the average daily ingest "size on disk" by index and server/indexer, to calculate the storage required for retention policy purposes? So far, I have found something like this:

index="_internal" source="*metrics.log" per_index_thruput source="/opt/splunk/var/log/splunk/metrics.log"
| eval gb=kb/1024/1024
| timechart span=1d sum(gb) as "Total Per Day" by series useother=f
| fields - VALUE_*

I'm not quite sure what is happening above with the useother=f or the last line of the search; the thread I found it on is dead enough that I don't expect a reply.

I would need any/all results from these three searches in table format sorted by index, server, to match up with the other searches for simple compilation. Any help that can be provided is greatly appreciated.
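Once those searches yield an average daily ingest figure and an on-disk-vs-raw ratio, the retention sizing itself is simple arithmetic. A back-of-the-envelope sketch in Python, where every number is an assumption for illustration only (measure your own values with the searches above):

```python
# Back-of-the-envelope retention sizing; all inputs below are assumed
# illustration values, not measurements.
daily_ingest_gb = 50.0       # average daily ingest for one index (from the timechart)
retention_days = 5 * 365     # 5-year retention policy
disk_to_raw_ratio = 0.5      # on-disk size vs raw size; check yours via dbinspect

required_gb = daily_ingest_gb * retention_days * disk_to_raw_ratio
print(round(required_gb / 1024, 1))  # ~44.6 TB needed for this one index
```

Comparing that figure against each index's maxTotalDataSizeMB (and the volume/partition it lives on) is what answers the "is there sufficient storage for 5-year retention" question.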
1. Yes, you can set up Splunk to run as a local user on Windows. But, per the docs, it has to be set up during install, or before you have started Splunk for the first time. Keep in mind that Splunk is very file-configuration driven, so even if you have to delete and re-install, I'm assuming your forwarder is getting hooked up to a Deployment Server to get its configurations after your base install.

2. The main thing is that the user needs to be an Administrator. From there, your limitations are going to be what that user has access to - e.g. does the user you're running Splunk as have permission to view the files you want to ingest? More info is in the docs on choosing your user.
Hello All, I'm a relative newbie and hoping the community can help me out. I'm kind of stuck on a query and I can't figure out how to get the correct results.

I have an event that has a referer and a txn_id. Multiple events with the same referer field can have the same txn_id.

Referer  Txn_id    response_time
google   abcd1234  42
google   abcd1234  43
google   abcd1234  44
google   1234abcd  45
google   1234abcd  46
google   1234abcd  47
google   1234abcd  48
yahoo    xyz123    110
yahoo    123xyx    120
yahoo    123xyz    130

What I am trying to do is get the average number of txn_ids per referer and the average of the response times for that. So something like this:

Referer  avg(count txn_id)  avg(response_time)
google   3.5                44.5
yahoo    1.5                120

Any help would be appreciated. Thanks!
Hi, I am working on a query where I need to display a table based on a multiselect input. The multiselect input options are: (nf, sf, etc.)

When I select "nf", only the columns starting with "nf" should display, along with "user" and "role", and the columns should display in the same order as they are mentioned. The same should apply when I select multiple options from the multiselect input. But I am facing an issue keeping the table columns in that order.

I have tried using:

|<search query> | stats list(*) as * by user, role

but this jumbles the column placement into alphabetical order, which I don't want.

I also tried setting tokens, putting the field names starting with "nf" in one token and those starting with "sf" in another:

|<search query> | table user, role, $nf_fields$ $,sf_fields$

With this method I also faced an issue: for example, if I select only sf from the multiselect input, the fields starting with nf are also displayed, with empty values.

--> Is it possible to fix the placement of the columns? or,
--> Can the empty columns be removed based on the multiselect input?

Either approach works for me.

Expected Output:

Please help me solve this. Thanks in advance.
How can I assign a value to the earliest argument in my query which is rounded down to the last 10 minutes? When I try

index=aaa earliest=((floor(now()/600))*600

I get an error that ((floor(now()/600))*600 is an invalid term.
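The arithmetic itself is just integer floor-division on the epoch timestamp; the problem is only that earliest= takes a literal value, not an expression. A quick Python sketch of the rounding being attempted:

```python
import time

now = int(time.time())
rounded = (now // 600) * 600  # floor to the previous 10-minute boundary

# The rounded value is always a multiple of 600 and at most 10 minutes back.
assert rounded % 600 == 0 and rounded <= now < rounded + 600
print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(rounded)))
```

That computed epoch value is what would need to be fed into earliest= as a literal (e.g. via a subsearch or a dashboard token), since the search language won't evaluate the expression in place.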
So, this is a duration test. In that case, why not use min/max to test the boundary?

index=cisco sourcetype=cisco:wlc snmpTrapOID_0="CISCO-LWAPP-AP-MIB::ciscoLwappApRogueDetected"
| dedup cLApName_0
| stats list(cLApName_0) as cLApName_0 min(_time) as first_rogue max(_time) as last_rogue by RogueApMacAddress
| where last_rogue - first_rogue > 86400
| rename cLApName_0 as "HQ AP"
| fieldformat first_rogue = strftime(first_rogue, "%F %H:%M:%S")
| fieldformat last_rogue = strftime(last_rogue, "%F %H:%M:%S")

You should probably use values instead of list, too. Not sure what value list adds to your quest. But if you use values, dedup is no longer needed.
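The min/max boundary test reduces to: group the sightings by MAC address, then keep only the groups whose earliest and latest timestamps are more than a day apart. A small Python sketch with hypothetical sample data:

```python
from collections import defaultdict

# Hypothetical (rogue_mac, epoch_time) sightings.
sightings = [
    ("aa:bb:cc:01", 1_700_000_000),
    ("aa:bb:cc:01", 1_700_100_000),  # ~27.8 h later -> persists > 1 day
    ("dd:ee:ff:02", 1_700_000_000),
    ("dd:ee:ff:02", 1_700_010_000),  # ~2.8 h later  -> filtered out
]

times = defaultdict(list)
for mac, t in sightings:
    times[mac].append(t)

# Same test as `| where last_rogue - first_rogue > 86400`.
persistent = [mac for mac, ts in times.items() if max(ts) - min(ts) > 86400]
print(persistent)  # ['aa:bb:cc:01']
```

This is also why min/max beats comparing consecutive events: it only needs the two extreme timestamps per group, regardless of how many sightings fall in between.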
I'm installing Splunk Universal Forwarder using the following command:

choco install splunk-universalforwarder --version=9.0.5 --install-arguments='DEPLOYMENT_SERVER=<server_address>:<server_port>'

This installs a SplunkForwarder service that runs as the user NT SERVICE\SplunkForwarder. Reading the documentation, this account is a virtual account, which is a managed local account. Despite being described as a managed local account, the documentation also states that "Services that run as virtual accounts access network resources by using the credentials of the computer account in the format <domain_name>\<computer_name>$."

Currently, my Windows machines are joined to the AD domain, but I'm working to change that and not join them to AD in the future. I have a couple of questions here:

1. Can I use this default user (NT SERVICE\SplunkForwarder) even without joining the VM to the AD domain?
2. What limitations will I face changing from this NT SERVICE account to a local account?

Thanks.
Looking at it again the token is working. The search was waiting for other tokens to be populated. 
These are two different issues.

Tokens in a dashboard get filled before the search is dispatched, so if you get an error in that case, there is probably something wrong with your token. Post your XML so we can see.

The other thing - it doesn't work like that. Eval sets a value on processed results; it's not a "variable" that can be passed to another part of the pipeline. There are some dynamic-searching techniques in Splunk, but this is not one of them, and it's not your main problem here, I think.
I can't seem to be able to set a variable or a token as the window parameter in the streamstats command.

| streamstats avg(count) as avg_count window=$window_token$

| eval c = 2
| streamstats avg(count) as avg_count window=c

I get an error saying the option value is not an integer. It seems it doesn't take the value of the variable/token. Is there any way to change the parameter dynamically?

"Invalid option value. Expecting a 'non-negative integer' for option 'window'. Instead got 'c'."
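For clarity on what the window option actually controls: streamstats with window=N computes a trailing statistic over the last N results, so the parameter must be a literal non-negative integer at parse time, not a field produced by eval. A rough Python equivalent of that trailing-window average:

```python
from collections import deque

def stream_avg(values, window):
    """Trailing moving average over the last `window` values,
    roughly what `streamstats avg(count) window=N` computes."""
    buf = deque(maxlen=window)  # old values fall out automatically
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

print(stream_avg([1, 2, 3, 4], 2))  # [1.0, 1.5, 2.5, 3.5]
```

In Python the window size is an ordinary function argument, which is exactly the flexibility the SPL parser doesn't offer for per-event field values.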
Check the logs further back to see earlier errors. Maybe you mistyped file paths, maybe the password was wrong...
Just find all events with system_id=aa-1* initially (to limit the number of events you're working with), and then use the regex command to limit the values to aa-1(-.*)?
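The point of the second step is that the wildcard aa-1* also matches values like aa-10 or aa-1x, while the regex keeps only aa-1 itself or aa-1 followed by a hyphenated suffix. A Python sketch of that filter (assuming the pattern is anchored to the whole value, as the regex command does against the field):

```python
import re

# Hypothetical field values that the wildcard aa-1* would all match.
system_ids = ["aa-1", "aa-1-foo", "aa-10", "aa-1x", "bb-2"]

# Keep "aa-1" itself, or "aa-1-" followed by anything; reject "aa-10", "aa-1x".
pattern = re.compile(r"aa-1(-.*)?")
matched = [s for s in system_ids if pattern.fullmatch(s)]
print(matched)  # ['aa-1', 'aa-1-foo']
```

The optional group (-.*)? is what lets the bare aa-1 through while still excluding values where extra characters follow without a hyphen.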