Try this:
| rest /services/authentication/users splunk_server=local
| fields roles title realname
| rename title as username
| search roles=admin
Change the value of roles in the last bit to the role you want to search on.
If you want a table of all roles and users assigned to each role, try this:
| rest /services/authentication/users splunk_server=local
| fields roles title realname
| rename title as username
| sort roles
You can do this via rest. Try this first to get the list of apps:
| rest /services/apps/local | table label version
If you want a count of all the apps:
| rest /services/apps/local | stats count as numberofapps
If you just want enabled apps, you could use:
| rest /services/apps/local | search disabled=0 | stats count as numberofapps
Is it two different Splunk environments? Can you provide more information on what the reason for load balancing across two different groups would be?
In my base search, it is only looking where search_id has a value; that could be the difference. Try taking that out of my search to see if that's it. My search extracts the sourcetype and the index from the search field, which would account for the difference in what you are seeing for the sourcetype. Either way, I think you should be close to what you were looking for.
With Splunk Cloud, you won't have CLI access to the files, however, you should be able to add/delete/edit indexes via the GUI (Search Head).
You could also get REST API access to your Splunk Cloud instance for taking a look at the indexes. There are no POST options though.
https://docs.splunk.com/Documentation/SplunkCloud/6.6.0/RESTTUT/RESTandCloud
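If it helps, here is a minimal sketch of how a read-only REST call against a Splunk management endpoint could be constructed from Python. The host name and token below are made-up placeholders, not values from any real environment; port 8089 is Splunk's default management port.

```python
import urllib.request

def build_rest_request(host, token, endpoint):
    """Build a read-only (GET) request against a Splunk REST endpoint.

    output_mode=json asks Splunk for JSON instead of the default Atom XML.
    """
    url = f"https://{host}:8089{endpoint}?output_mode=json"
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# Hypothetical host and token -- replace with your own, then send the
# request with urllib.request.urlopen(req) and read the response.
req = build_rest_request("example.splunkcloud.com", "MY-TOKEN",
                         "/services/data/indexes")
```

Since Splunk Cloud only exposes read access here, keeping the calls to GET requests matches the "no POST options" limitation above.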
Try something like this:
index=_audit action=search info=granted search=* NOT "search_id='scheduler" NOT "search='|history" NOT "user=splunk-system-user" NOT "search='typeahead" NOT "search='| metadata type=* | search totalCount>0"
| rex field=search "index=(?P<search_index>[^ ]+)"
| rex field=search "sourcetype=(?P<search_sourcetype>[^ ]+)"
| rex field=search_index "\"(?P<search_index>\w+)"
| rex field=search_sourcetype "\"(?P<search_sourcetype>\w+)"
| stats max(_time) as last_searched by search_index search_sourcetype
| eval last_searched=strftime(last_searched, "%m/%d/%y %H:%M:%S")
| sort -search_index -search_sourcetype
It will produce a table of the last_searched time for each combination of search_index and search_sourcetype.
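As a side note, the two-pass rex extraction in that search behaves like the following Python regexes; the sample audit search string is made up for illustration:

```python
import re

# Hypothetical fragment of an audit-log search field
search = 'search index="main" sourcetype="access_combined" status=404'

# First pass: grab everything after index= / sourcetype= up to a space
search_index = re.search(r'index=(?P<search_index>[^ ]+)', search).group("search_index")
search_sourcetype = re.search(r'sourcetype=(?P<search_sourcetype>[^ ]+)', search).group("search_sourcetype")

# Second pass (like the later rex commands): strip the surrounding quotes
search_index = re.search(r'"(?P<v>\w+)', search_index).group("v")
search_sourcetype = re.search(r'"(?P<v>\w+)', search_sourcetype).group("v")
```

The second pass only matters when the search quoted its index or sourcetype values, which is why the original search does the extraction in two steps.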
I would take a look at the Analysis of Splunkbase Apps app: https://splunkbase.splunk.com/app/2919/
"This App provides a simple dashboard with App stats and allows you to search for Splunk Apps within Splunk. It was also designed to work if you are offline, as long as you have been online once to collect data.
This App can be used to determine which Apps are certified to work with Splunk Cloud, Hunk, ES, etc."
It will show the last update on one of the dashboards which may be what you are looking for. If not exactly, then you could always tweak the search behind the dashboard.
First, I would ask whether that field is required for any dashboards, searches, or lookups (you indicated it was being used for these). I'm making the assumption that you are using an app or add-on, so you would probably want to go into the props.conf in the app/add-on's local directory and modify the existing field extraction to change the name from action to something else. You would need to modify any dashboards, searches, etc... that reference the original field action so they don't break.
As for the new field, you could create it via the GUI based field extractor, or you could add the extraction while you are in the props.conf file changing the name of the original action field.
If this is not required to be a permanent thing, meaning, you just need to do it for a search or something, then you could handle it in the search itself:
... | eval action_orig = action | rex field=_raw "REGEX HERE (?<action>REGEX TO CAPTURE FIELD VALUE) REGEX HERE"
Since I have no idea what your regex would be to capture the value for the new action field, you would need to replace the placeholders in capital letters. The named capture group will populate the field action.
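For illustration only, here is what that named-capture idea looks like in Python; the sample raw event and the pattern are both assumptions, since I don't know your actual event format:

```python
import re

# Hypothetical raw event -- your real events and regex will differ
_raw = "2017-08-01 12:00:00 user=alice action=login src=10.0.0.5"

action_orig = "login"  # stands in for preserving the current field value
# A named capture group, like rex's (?<action>...), fills the new field
action = re.search(r"action=(?P<action>\w+)", _raw).group("action")
```

The point is just that whatever pattern you settle on, the capture group's name decides which field gets the value.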
To adonio's point, I would take a look at this to see if your search can be written more efficiently:
http://docs.splunk.com/Documentation/Splunk/6.6.0/Search/Writebettersearches
Selecting an appropriate time window is the first place you should start. Next, you should search by a specific index or indexes. This prevents Splunk from having to open up buckets of data that aren't even relevant to your search. Another tip would be to use host, source, or sourcetype early in your search. This is metadata that is added by Splunk on ingestion and can help make your searches more efficient.
These are all common things that people run into when searching, so apologies if you have already taken into account many of these things.
Have you taken a look at the Splunk 6.x Dashboard examples app, available on Splunkbase?
https://splunkbase.splunk.com/app/1603/
This is a great app for learning how to do different types of visualizations.
For each example, you can see the XML for the dashboard, as well as any javascript or css that is required for that example.
Check out the accepted answer in this post:
https://answers.splunk.com/answers/200468/round-problem.html
I have tested this myself and confirmed it works.
Is your stats command counting by services? I had changed the name in my example to services_nonum. You could either use that or change that to services and leave the stats command line alone.
You could use the same method on that field:
[BASE SEARCH]
| eval tmp=split(signature_id,":")
| eval services=mvindex(tmp,1)
| eval tmp2 = split(services,"-")
| eval services_nonum = mvindex(tmp2,0)
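Walking a hypothetical signature_id like "AV:services-123" through those evals, the logic is equivalent to this Python (the sample value is an assumption based on the "AV" prefix mentioned below):

```python
# Hypothetical signature_id value
signature_id = "AV:services-123"

tmp = signature_id.split(":")   # like split(signature_id,":") -> ["AV", "services-123"]
services = tmp[1]               # like mvindex(tmp,1)  -> "services-123"
tmp2 = services.split("-")      # like split(services,"-") -> ["services", "123"]
services_nonum = tmp2[0]        # like mvindex(tmp2,0) -> "services"
```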
You are splitting your field on the "-" delimiter instead of the ":". Also, in your mvindex, you want 1, not 0. The 0 would be the first value, or "AV" in your example.
Try this method:
https://answers.splunk.com/answers/109253/how-to-filter-or-extract-fields-before-indexing-time.html
I have tested this with a CSV file and it works.
SAMPLE DATA
field1,field2,field3,field4
a1,a2,a3,a4
b1,b2,b3,b4
c1,c2,c3,c4
props.conf
[excludefields_ex]
TRANSFORMS-somefields = somefields
transforms.conf
[somefields]
DEST_KEY = _raw
REGEX = (\S+),(\S+),(\S+),(\S+)
FORMAT = $1 $3
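The REGEX/FORMAT pair rewrites _raw before it is indexed: the four capture groups grab the four comma-separated values, and FORMAT keeps only groups 1 and 3. You can check the same substitution in Python, using one row of the sample data above:

```python
import re

# One row of the sample CSV above
raw = "a1,a2,a3,a4"

# Same pattern as transforms.conf; FORMAT = $1 $3 keeps groups 1 and 3
new_raw = re.sub(r"(\S+),(\S+),(\S+),(\S+)", r"\1 \3", raw)
```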
What is the reasoning behind showing the row with "Yellow"? Is this the latest status?
If so, you could try:
[YOUR CURRENT SEARCH]
| sort -Status
| dedup Track_Name
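In plain terms: the descending sort pushes the row you want to the top, and dedup then keeps only the first row per Track_Name. A small Python sketch of that keep-first logic, with made-up sample rows (note that sort -Status is a descending text sort, so "Yellow" happens to sort above "Green"):

```python
# Hypothetical status rows
rows = [
    {"Track_Name": "track1", "Status": "Green"},
    {"Track_Name": "track1", "Status": "Yellow"},
    {"Track_Name": "track2", "Status": "Green"},
]

# like | sort -Status (descending text sort)
rows_sorted = sorted(rows, key=lambda r: r["Status"], reverse=True)

# like | dedup Track_Name: keep the first row seen per Track_Name
seen, deduped = set(), []
for row in rows_sorted:
    if row["Track_Name"] not in seen:
        seen.add(row["Track_Name"])
        deduped.append(row)
```

If your real status values don't happen to sort in the order you want, you would need to map them to a sortable rank first rather than relying on the text sort.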
Take a look at the section titled "Wildcards and regular expression metacharacters" in this section of the documentation: http://docs.splunk.com/Documentation/Splunk/6.6.0/Data/Specifyinputpathswithwildcards
According to the docs, "If the regular expression metacharacters occur within or after a segment that contains a wildcard, Splunk Enterprise treats the metacharacters as a regular expression and matches files to monitor accordingly."
By segment, the docs mean the blocks of text between directory separators. So this looks like it would work. The one thing you might need to change is to add a + after the [0-9]; otherwise, it will only match a single digit, and from your question it sounded like there could be more than one.
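A quick way to see the difference: the segment behaves like an ordinary regex, and [0-9] versus [0-9]+ match exactly as below (the file names are made up):

```python
import re

# Hypothetical log file names matched against the path segment's regex
single = re.fullmatch(r"app[0-9]\.log", "app7.log")        # one digit: matches
multi = re.fullmatch(r"app[0-9]\.log", "app712.log")       # three digits: no match
multi_plus = re.fullmatch(r"app[0-9]+\.log", "app712.log")  # with +: matches
```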