All Posts


I'm assuming you know about and are modeling things after the Splunk REST examples? Can you share a bit of your Python showing how you are returning your data? My thinking is that if your handle method isn't returning your data in the correct JSON format, it can't generate the response.
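For reference, a minimal sketch of the shape a persistent REST handler's handle method is expected to return, following the Splunk REST examples. The class name and payload contents here are made up, and the real handler would subclass splunk.persistconn.application.PersistentServerConnectionApplication (omitted so the sketch stays self-contained):

```python
import json

# Sketch only: a real handler subclasses
# splunk.persistconn.application.PersistentServerConnectionApplication.
class EchoHandler:
    def handle(self, in_string):
        # Splunk hands the request to the handler as a JSON string.
        request = json.loads(in_string)
        reply = {"received_method": request.get("method", "GET")}
        # The handler must return a dict containing a JSON string payload
        # and an HTTP status code.
        return {"payload": json.dumps(reply), "status": 200}
```

If your handle returns a bare dict or a plain string instead of this payload/status structure, the response generation is exactly where things tend to break.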
Try something like this:

| stats dc(Txn_id) as unique_tx_ids count avg(response_time) as average by Referer
| eval average_count_txns_id=count/unique_tx_ids
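The arithmetic behind that stats/eval pair can be sanity-checked in plain Python against the sample events from the question (this is just a model of the logic, not how Splunk executes it):

```python
from collections import defaultdict

# The sample events from the question: (Referer, Txn_id, response_time).
events = [
    ("google", "abcd1234", 42), ("google", "abcd1234", 43),
    ("google", "abcd1234", 44), ("google", "1234abcd", 45),
    ("google", "1234abcd", 46), ("google", "1234abcd", 47),
    ("google", "1234abcd", 48),
    ("yahoo", "xyz123", 110), ("yahoo", "123xyz", 120),
    ("yahoo", "123xyz", 130),
]

def summarize(rows):
    """Mimic: stats dc(Txn_id) count avg(response_time) by Referer,
    then eval average_count_txns_id=count/unique_tx_ids."""
    by_ref = defaultdict(lambda: {"txn_ids": set(), "count": 0, "rt_sum": 0})
    for referer, txn_id, rt in rows:
        b = by_ref[referer]
        b["txn_ids"].add(txn_id)       # dc(Txn_id)
        b["count"] += 1                # count
        b["rt_sum"] += rt              # for avg(response_time)
    return {
        ref: {
            "average_count_txns_id": b["count"] / len(b["txn_ids"]),
            "average": b["rt_sum"] / b["count"],
        }
        for ref, b in by_ref.items()
    }
```

For google that gives 7 events over 2 distinct txn_ids = 3.5 events per id, with an average response time of 45; for yahoo, 3 events over 2 ids = 1.5 and 120.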
I am working on setting up a third-party evaluation of a new network management and security monitoring installation for an enterprise network that uses Splunk for various log aggregation purposes. The environment has 6 indexers with duplication across 3 sites, and hundreds of indexes set up and configured by the installers. The question that I need to write a test for: "Is there sufficient storage available for compliance with data retention policies? (e.g. is there sufficient storage available to meet 5-year retention guidelines for audit logs?)"

I would like to run simple search strings to produce the necessary data tables. I am no wizard at writing the appropriate queries, and I don't have access to an environment complicated enough to try these things out before my limited time on the production environment to run my reports. After reading through the forums for hours, it seems that answering this storage question may be harder than originally anticipated, as Splunk does not seem to have any default awareness of how much on-disk space it is actually consuming.

1. Research has shown that I need to make sure that the age-off and size cap for each index are appropriately set with the frozenTimePeriodInSecs and maxTotalDataSizeMB attributes in each indexes.conf file. Is there a search I can run that will provide a simple table of these two attributes for all indexes across the environment? e.g. index name, server, frozenTimePeriodInSecs, maxTotalDataSizeMB

2. Is there any other configuration where allocated space is determined for an index that can be returned with a search?

3. Is there a search string I can run to show the current storage consumption (size on disk) for all indexes on all servers?
I have seen some options here on the forums, and I think the answer for this one might be the following:

| dbinspect index=*
| eval sizeOnDiskGB=sizeOnDiskMB/1024
| eval rawSizeGB=rawSize/1024
| stats sum(rawSizeGB) AS rawTotalGB, sum(sizeOnDiskGB) AS sizeOnDiskTotalGB BY index, splunk_server

4. What is the best search string to determine the average daily ingest "size on disk" by index and server/indexer, to calculate the storage required for retention policy purposes? So far, I have found something like this:

index="_internal" source="*metrics.log" per_index_thruput source="/opt/splunk/var/log/splunk/metrics.log"
| eval gb=kb/1024/1024
| timechart span=1d sum(gb) as "Total Per Day" by series useother=f
| fields - VALUE_*

I'm not quite sure what is happening above with the useother=f or the last line of the search; the thread I found it on is dead enough that I don't expect a reply. I would need any/all results from these three searches in table format, sorted by index and server, to match up with the other searches for simple compilation. Any help that can be provided is greatly appreciated.
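For question 1, one possible sketch uses the REST endpoint for index configuration, assuming your role is allowed to run | rest and that all the indexers are search peers of the search head you run this from:

```
| rest /services/data/indexes
| table title splunk_server frozenTimePeriodInSecs maxTotalDataSizeMB
| sort title splunk_server
```

Here title is the index name, so this should yield exactly the index name / server / frozenTimePeriodInSecs / maxTotalDataSizeMB table asked for, in a form that can be matched against the dbinspect results by index and server.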
1. Yes, you can set up Splunk to run as a local user on Windows. But, per the docs, it has to be set up during install, or before you have started Splunk for the first time. Keep in mind that Splunk is very configuration-file driven, so even if you have to delete and re-install, I'm assuming your forwarder is getting hooked up to a deployment server to get its configurations after your base install.

2. The main thing is that the user needs to be an Administrator. From there, your limitations are going to be what that user has access to - e.g. does the user you're running Splunk as have permission to view the files you want to ingest. More info here in the docs on choosing your user.
Hello All, I'm a relative newbie and hoping the community can help me out. I'm kind of stuck on a query and I can't figure out how to get the correct results.

I have an event that has a Referer and a Txn_id. Multiple events with the same Referer field can have the same Txn_id.

Referer  Txn_id    response_time
google   abcd1234  42
google   abcd1234  43
google   abcd1234  44
google   1234abcd  45
google   1234abcd  46
google   1234abcd  47
google   1234abcd  48
yahoo    xyz123    110
yahoo    123xyz    120
yahoo    123xyz    130

What I am trying to do is get the average number of Txn_ids per Referer, and the average of the response times for that. So something like this:

Referer  avg(count txn_id)  avg(response_time)
google   3.5                45
yahoo    1.5                120

Any help would be appreciated. Thanks!
Hi, I am working on a query where I need to display a table based on a multiselect input. The multiselect input options are: (nf, sf, etc.)

When I select "nf", only the columns starting with "nf" should display, along with "user" and "role", and the columns should display in the same order as they are mentioned; the same should apply when I select multiple options from the multiselect input. But I am facing an issue fetching the table in that order.

I have tried using:

| <search query> | stats list(*) as * by user, role

but this jumbles the column placement into alphabetical order, which I don't want.

I also tried setting tokens, putting the field names starting with "nf" in one token and those starting with "sf" in another:

| <search query> | table user, role, $nf_fields$, $sf_fields$

With this method I also faced an issue - for example, if I select only "sf" from the multiselect input, the fields starting with "nf" are also displayed, with empty values.

--> Is it possible to fix the placement of the columns? or,
--> Can the empty columns be removed based on the multiselect input?

Either approach works for me. Expected Output: please help me to solve this. Thanks in advance.
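One possible sketch of a wildcard-based approach in Simple XML (the token name prefixes and the surrounding query are made up for illustration): give the multiselect a * value suffix and a space delimiter, then let table expand the wildcards. Wildcards in table only match fields that actually exist in the results, so an unselected prefix should contribute no empty columns, and the explicit order user, role, then the token keeps the placement fixed:

```
<input type="multiselect" token="prefixes" searchWhenChanged="true">
  <label>Field group</label>
  <choice value="nf">nf</choice>
  <choice value="sf">sf</choice>
  <valueSuffix>*</valueSuffix>
  <delimiter> </delimiter>
</input>
...
<query>&lt;search query&gt; | table user role $prefixes$</query>
```

Selecting nf and sf would make the token expand to nf* sf*, so the table becomes | table user role nf* sf*.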
How can I assign a value to the earliest argument in my query which is rounded to the last 10 minutes? When I try

index=aaa earliest=((floor(now()/600))*600

I get an error that ((floor(now()/600))*600 is an invalid term.
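Time modifiers like earliest only accept literal epoch values or relative-time strings, not eval expressions, which is why that search errors out. One commonly used workaround sketch is to compute the epoch in a subsearch and hand it back with return (the index name aaa is taken from the question):

```
index=aaa [| makeresults
  | eval earliest=floor(now()/600)*600
  | return earliest]
```

The subsearch emits earliest=<computed epoch> into the outer search, which has the same effect as typing the literal value by hand.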
So, this is a duration test. In that case, why not use min/max to test the boundary?

index=cisco sourcetype=cisco:wlc snmpTrapOID_0="CISCO-LWAPP-AP-MIB::ciscoLwappApRogueDetected"
| dedup cLApName_0
| stats list(cLApName_0) as cLApName_0 min(_time) as first_rogue max(_time) as last_rogue by RogueApMacAddress
| where last_rogue - first_rogue > 86400
| rename cLApName_0 as "HQ AP"
| fieldformat first_rogue = strftime(first_rogue, "%F %H:%M:%S")
| fieldformat last_rogue = strftime(last_rogue, "%F %H:%M:%S")

You should probably use values instead of list, too. Not sure what value list adds to your quest. But if you use values, dedup is no longer needed.
I'm installing Splunk Universal Forwarder using the following command:

choco install splunk-universalforwarder --version=9.0.5 --install-arguments='DEPLOYMENT_SERVER=<server_address>:<server_port>'

This installs a SplunkForwarder service that runs as the user NT SERVICE\SplunkForwarder. Reading the documentation, this account is a virtual account, which is a managed local account. Despite that, the documentation also states that "Services that run as virtual accounts access network resources by using the credentials of the computer account in the format <domain_name>\<computer_name>$."

Currently, my Windows machines are joined to the AD domain, but I'm working to change that and not join them to AD in the future. I have a couple of questions here:

1. Can I use this default user (NT SERVICE\SplunkForwarder) even without joining the VM to the AD domain?
2. What limitations will I face changing from this NT SERVICE account to a local account?

Thanks.
Looking at it again, the token is working. The search was waiting for other tokens to be populated.
These are two different issues. Tokens in a dashboard get filled in before the search is dispatched, so if you get an error in that case, there is probably something wrong with your token. Post your XML so we can see.

The other thing - it doesn't work like that. eval sets a value on processed results. It's not a "variable" that can be passed to another part of the pipeline. There are some dynamic searching techniques in Splunk, but this is not one of them, and it's not your main problem here, I think.
I can't seem to be able to set a variable or a token as the window parameter in the streamstats command.

| streamstats avg(count) as avg_count window=$window_token$

| eval c = 2
| streamstats avg(count) as avg_count window=c

I get an error saying the option value is not an integer. It seems it doesn't take the value of the variable/token. Is there any way to change the parameter dynamically?

"Invalid option value. Expecting a 'non-negative integer' for option 'window'. Instead got 'c'."
Check the logs further "backwards" to see earlier errors. Maybe you mistyped the file paths, maybe the password was wrong...
Just find all events with system_id=aa-1* initially (to limit the number of events you're working with) and then use the regex command to limit the values to only aa-1(-.*)?
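Put together, that two-step filter might look like this (the index name is a placeholder; the field name and pattern come from the thread):

```
index=your_index system_id=aa-1*
| regex system_id="^aa-1(-.*)?$"
```

The wildcard search cheaply narrows the event set at the index level, and the anchored regex then drops values like aa-123 that the wildcard would otherwise let through.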
I was able to get things to work with makeresults and a mocked-up dashboard. How does this work for you on your end:

<form version="1.1" theme="dark">
  <label>Test Dashboard</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="system_id" searchWhenChanged="true">
      <label>system_id</label>
      <choice value="*">*</choice>
      <choice value="AA-1">AA-1</choice>
      <choice value="AA-2">AA-2</choice>
      <choice value="AA-10">AA-10</choice>
      <initialValue>*</initialValue>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults format="json" data="[{\"system_id\":\"AA-1\"}, {\"system_id\":\"AA-2\"}, {\"system_id\":\"AA-10\"}, {\"system_id\":\"AA-15\"}, {\"system_id\":\"AA-1\"}, {\"system_id\":\"AA-123\"}, {\"system_id\":\"aa-1-a\"}, {\"system_id\":\"aa-1-b\"}]" | search system_id="$system_id$"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
There is a difference between "is supported" and "will work on". 9.0.1 was released before (or roughly at the same time as) the 6.0 kernel, so it's not on the officially supported list. That doesn't mean, though, that it won't work. It's a userland program, so it should be relatively independent of the kernel version.
Hi @Sthitha.Madhiraju, Check out this existing TKB article https://community.appdynamics.com/t5/Knowledge-Base/Why-is-the-Java-Agent-not-reporting-to-the-Controller/ta-p/13974 Let me know if it helps. 
These are the officially available types of license: https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/TypesofSplunklicenses

There is no "edu" license as such. There could be special pricing for edu organizations, but you should verify that with a sales representative, not on the community forum.
A couple of steps to troubleshoot:

- If you remove the SSL, can you get Splunk to start up and listen on that port?
- Are your paths 100% correct? This could be related to a typo in the path/filename.
- Do your certificates have the correct permissions so Splunk can read them?

As a side note, Splunk will auto-encrypt passwords like that in your .conf files. You'll see the following wording for values it does this with in the documentation (e.g. the inputs.conf sslPassword documentation): "Upon first use, the input encrypts and rewrites the password."
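For example, a cleartext sslPassword might start out in inputs.conf like this (the stanza, path, and password below are placeholders, not taken from any real config):

```
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = my_cleartext_password
```

After Splunk next starts and uses the input, it rewrites the sslPassword value in place as an encrypted string (on modern versions it will begin with $7$), so a cleartext value in the file is expected only before first use.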