All Posts


Hi @ww9rivers  Firstly, users are granted Splunk roles based on their LDAP group in authentication.conf, and those roles (or the roles they inherit) restrict access to indexes via srchIndexesAllowed in authorize.conf. So if users can log in to Splunk, the Splunk roles will apply, and having Splunk Cloud with on-premise LDAP shouldn't make a difference. My only guess as to the cause of this issue is that the user has some role which is overriding the permissions of the owner. You will note in the documentation that the search is not actually run as the owner, but rather with the permissions of the owner. To narrow down the issue, create a test user from the GUI and add the user's roles to it one by one. Try running the saved search as the test user after each role is added, to see which role is causing the issue.
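For reference, a minimal sketch of how index access is granted per role in authorize.conf - the role name and index names here are hypothetical:

[role_ldap_analysts]
srchIndexesAllowed = main;app_logs
srchIndexesDefault = main

You can confirm the effective settings for a role with btool, e.g. $SPLUNK_HOME/bin/splunk cmd btool authorize list role_ldap_analysts --debug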
BUMP! I am having the same issue with a similar config. @himaniarora20 did you ever end up finding a resolution?
Hi @madhav_dholakia  Here's what you should put in the alert's config to achieve what you want:
Search: | inputlookup <file>
Subject: Selfheal Alert - $result.Customer$ - $result.CheckName$ - $result.Device$ - $result.MonthYear$ - $result.Status$
Trigger: For each result
Throttle: [check]
Suppress results containing field value: Device (this prevents Splunk from sending duplicate alerts for the same device)
Suppress triggering for: <some time period> - set this to however often your lookup-populating report is scheduled to run
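If you prefer to set this in savedsearches.conf directly, here's a rough sketch of the equivalent stanza - the stanza name, schedule, suppress period, and email address are placeholders:

[Selfheal Alert]
search = | inputlookup <file>
enableSched = 1
cron_schedule = */15 * * * *
counttype = number of events
relation = greater than
quantity = 0
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.fields = Device
alert.suppress.period = 15m
action.email = 1
action.email.to = you@example.com
action.email.subject = Selfheal Alert - $result.Customer$ - $result.CheckName$ - $result.Device$ - $result.MonthYear$ - $result.Status$

alert.digest_mode = 0 is what corresponds to the "For each result" trigger in the UI.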
Hi @sonalpriya Are you asking which logs from Octopus should be ingested to Splunk via HEC? Or perhaps are you asking which Splunk internal logs will show the ingestion of Octopus logs?  
Hi @ELADMIN Would you please share the search query used to generate the chart in your screenshot?
Hi @Ahmed7312 would you please share a screenshot of the error?
Hi @wpb162  It could be that the removal of the users has not propagated to all members of the SHC yet. How many members are in your SHC? How long did you leave it after running the "splunk remove user" command?
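To rule out a replication problem, you can check the cluster state from the CLI - a quick example, assuming you run it on any SHC member:

$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:<password>

This shows the captain and the state of each member, which can help confirm whether all members are up and in sync before you re-test the removed user.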
Here are a couple of things to check:
1. Check that the settings in props.conf are actually being applied to the sourcetype: $SPLUNK_HOME/bin/splunk cmd btool props list test1:sec
2. Check the _internal logs for errors related to parsing for this sourcetype: index=_internal splunk_server=* source=*splunkd.log* (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose) (log_level=WARN OR log_level=ERROR) data_sourcetype="test1:sec"
Wow. For all my queries, I had been using the following fields command under the assumption it dropped _raw.

| fields _time, xxx, yyy, zzz, ....

Then one day I started a large mvexpand and ran into the memory limit. My thought upon seeing this was 'Huh? Well, worth a try I guess.'

| fields _time, xxx, yyy, zzz, ....
| fields - _raw

Boom, mvexpand completes successfully. The heck? It actually cut the search time in half too.
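The likely explanation: the fields command keeps internal (underscore-prefixed) fields such as _raw and _time even when they are not in the keep list, so _raw has to be removed explicitly. A quick way to see this, using the always-present _internal index:

index=_internal | head 5 | fields _time, sourcetype | eval raw_len=len(_raw)

raw_len comes back populated because _raw survived the fields command; add | fields - _raw before the eval and it comes back null.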
After some investigation, long story short: this is not possible, and it would need to be a new feature suggestion if someone wants to request it. The explanation: I looked at the network logs for Dashboard Studio and found the payload for base and chained searches. The base search has its own parameter in the payload called 'search'. All chained searches are grouped together in a parameter called 'postprocess_searches'. There is no other parameter that would support a third search type called 'append'; it simply does not exist in the payload structure.

Furthermore, as the name of the 'postprocess_searches' parameter suggests, only the base search benefits from distributable commands. All post-process (chained) searches run on the search head only. That is an important rule to keep in mind: if you want your search to be fast, all the compute-heavy commands need to be in the base search. Unfortunately, that means your base search needs to be a relatively large table of all sourcetypes appended together, with whatever aggregation is required done there. Then use chained searches to slice and dice this large table into small bits, for example splitting by 'sourcetype' to branch the table into multiple smaller "base" tables that serve as the basis for further chained searches.

In my case, I formulated my base search as a merge of 3 different sourcetypes using a stats-based join. It is reasonably fast thanks to the distributable base search, despite having 15+ chained searches running off of it!
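A rough sketch of the pattern, with hypothetical index, sourcetype, and field names:

Base search (distributable, does the heavy lifting):
index=app (sourcetype=web OR sourcetype=db OR sourcetype=cache)
| stats count AS events avg(duration) AS avg_duration BY sourcetype, host

Chained search (runs on the search head, so keep it light):
| where sourcetype="web"
| sort - avg_duration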
Hi Jananie, just to clarify: all open cases will be migrated and available in the new Cisco support system, Support Case Manager, regardless of age. In addition, you'll get the last 30 days of closed cases (i.e. from May 14 - June 14). I assume for your use case above you're mostly interested in open cases (current issues), so you should be well covered.
Anyone know of any examples on SplunkBase that have JavaScript-written commands using the Python SDK? I've written about a dozen custom commands using Python and am familiar with that process. The dev docs suggest the Splunk SDK for Python should be used for JavaScript commands, but I'm not understanding how that's possible without importing libraries like Flask. https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/nonpythonscscs
Hello Everyone, I downloaded "TA_Genesys_Cloud_1.0.14_export.tgz" from https://github.com/SplunkBAUG/CCA, added the required inputs, and configured Splunk with the available Genesys Cloud CX (Canada Region) OAuth credentials (client ID and client secret). When I try to search using index="genesys cloud", it returns an empty response. Any help will be much appreciated. Thank you in advance.
Can you share your XML, or are you talking about Dashboard Studio? I'm not quite following what you mean by type=abc or type=def for tokens, and whereabouts the pie charts fit into this.
Try like this:
index=prod_logs app_name="abc"
| rex field=_raw "account_id\\\"\:\\\"(?<accid>[^\\]+)\\\"\,\\\"activity"
| where isnotnull(accid)
| table accid
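For context, that rex pattern assumes the account_id value is embedded in backslash-escaped JSON, e.g. a hypothetical event like ...\"account_id\":\"12345\",\"activity\".... If your events hold plain (unescaped) JSON instead, a simpler pattern such as account_id":"(?<accid>[^"]+) - or the spath command - may be a better fit.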
What if someone mistakenly disables a UDP input? Just the first example off the top of my head.
What do you mean by "dynamic" here? Also, you might need WRITE_META = true. You might also want to look at ingest actions.
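For reference, WRITE_META is normally used when a transform writes a new indexed field into the event's metadata, rather than overwriting a metadata key - a generic sketch, not specific to this routing case:

[add_env_field]
SOURCE_KEY = MetaData:Source
REGEX = (\w+)-\d+
FORMAT = env::$1
WRITE_META = true

When the target is a metadata key like _MetaData:Index, DEST_KEY is used instead and WRITE_META is not required.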
Hi all, I'm trying to see if logs can be sent to different indexes at index time depending on a regex. Is it possible to send logs to an index name that is part of the Source metadata? Below are my props.conf and transforms.conf.
props.conf:
[test:logs]
TRANSFORMS-new_index = new_index
transforms.conf:
[new_index]
SOURCE_KEY = MetaData:Source
REGEX = (?<index>\w+)\-\d+
FORMAT = $1                                       #This needs to be dynamic
DEST_KEY = _MetaData:Index
Thanks in advance.
I have this app set up and installed, however there are a lot of panels, which are essentially saved searches or just search strings that get run. But many of these panels are the same. Is there a list of panels under FISMA so that a person can combine them into fewer than, say, 5 dashboards, rather than 3-4 that are the same plus 1-2 offsets that sit under different categories (e.g. AC-5 having 3 and AC-18 having 3, with 1 being different among them)? Bonus: it seems to me there's got to be someone who has done this already and has their own version of a how-to guide that would benefit the FISMA or RMF world. I'm just talking about covering as many controls as we can under a few - say 3-5 - dashboards, so we don't have people clicking through over 20 different places for one item of interest.
I was batting around the double transpose method with a peer and got a minor variation to work regardless of whether I was throwing the top5 or top40 at it. And it doesn't muck with the column names. Given the simplicity, I think I am going to drop the idea of going after a table subsearch following the timechart. There is no need, since the double transpose method is clean and elegant. Thank you for the suggestion and the follow-ups. Tim

| fields - _span*
| transpose
| sort - "row 5" ```uses values from the last row (latest time)```
| transpose header_field="column" 0
| fields - column
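For anyone reproducing this, a hypothetical feeding search - the sort key "row 5" assumes the timechart produced exactly five time buckets, so "row 5" is the latest one:

index=_internal earliest=-5d@d latest=@d | timechart span=1d count BY sourcetype

If your timechart yields a different number of rows, adjust the "row N" reference accordingly.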