All Posts

Hi @Splunk-Star , let me understand: this is one of your events; if you have many events they are displayed, but if you have only one event it isn't displayed, is that correct? Do you have this issue only with these logs, or also with other logs? Maybe the issue is related to the length of the log: I encountered an issue with very long logs that were displayed with a very long delay because of their size. Did you try truncating the event, e.g. with substr:

index = "*" "a39d0417-bc8e-41fd-ae1f-7ed5566caed6" "*uploadstat*" status=Processed
| eval _raw=substr(_raw,1,100)

Ciao. Giuseppe
Hi @iam_ironman , only one question: why? Indexes aren't database tables; indexes are containers where logs are stored, and log categorization is done with the sourcetype field. Custom indexes are usually created mainly when there are different requirements for retention and access grants, and secondarily for different log volumes. So why do you want to create so many indexes, which you would have to maintain and which, after the retention period, will be empty?

Anyway, the rex you used is wrong: you don't need to extract the index field to assign a dynamic value to it, you have to capture a group and use it for the index value:

[new_index]
SOURCE_KEY = MetaData:Source
REGEX = ^(\w+)\-\d+
FORMAT = $1
DEST_KEY = _MetaData:Index

Ciao. Giuseppe
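For completeness, a transform like this only takes effect once it is referenced from props.conf on the parsing tier; a minimal sketch, where the sourcetype name is an assumption and should be replaced with the one your data actually uses:

# props.conf (sourcetype name is hypothetical)
[your:sourcetype]
TRANSFORMS-route_index = new_index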
I have a set of DCs from which I need to monitor device logs located in a shared path. I tried entering the stanza below for each server and DC separately, which worked. But when I try to standardise this monitoring with a pattern, to avoid pushing the configs each time, it does not work. Can you let me know where it's going wrong?

[monitor://\\azwvocasp00005\PRDC_DeviceLogs\DeviceLogs]
disabled = 0
recursive = true
sourcetype = Vocollect:DeviceLog
index = rpl_winos_application_prod

Now I am trying:

[monitor://\\azwvocasp000*\*DC_DeviceLogs\DeviceLogs]
disabled = 0
recursive = true
sourcetype = Vocollect:DeviceLog
index = rpl_winos_application_prod

Thanks in advance
I did, actually. I just found out that you need to try it with the indexer instead of the Search Head. Also, attach an IAM role to your Splunk server and use the ARN of that same role in your Splunk config.
I do not know much about Cribl, but these settings in props.conf might help:

props.conf on the UF:

[test]
EVENT_BREAKER_ENABLE=true
EVENT_BREAKER=([\r\n]+)\{ \"__CURSOR\"

props.conf on the indexer (assuming __REALTIME_TIMESTAMP is the timestamp field):

[test]
KV_MODE=json
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\{ \"__CURSOR\"
MUST_BREAK_AFTER=\}
TIME_PREFIX=\"__REALTIME_TIMESTAMP\"\s\:\s\"
TIME_FORMAT=%s%6N
MAX_TIMESTAMP_LOOKAHEAD=18
Yes, the data is sent from the Splunk UF --> Cribl (Stream / Worker) --> Splunk indexer.
So after much thought and deliberation, this is how you can see the real-time EPS and the trends around it in UBA:
1. Add the parameter ?system to the URL, right before the # value.
2. Once done, go to Manage -> Data Sources -> select a data source to reveal the real-time EPS and the trends associated with it.
Dear Regina, I am more interested in closed cases. Let me explain my view. For example, I hit an issue with the Java agent working with an unusual application that is rarely used by customers around the world. Such a case was already experienced by some other customer, and support provided a solution after a lot of troubleshooting and resolved it. That ticket was closed back in 2023. It is a collection of brainstorming from experts and a great knowledge base. If the closed cases become volatile with this new migration, then when a similar issue occurs the support team, consultants and customers will have to sit for hours again to find the solution. There are many instances similar to this. It will take years for Cisco to build such a valuable knowledge base again. My humble request is to keep the database of older cases as a reference point instead of deleting it forever. Thanks for considering my request. Jananie
You mentioned in your post you are using UF to send the data. Is the data going from Splunk UF --> Cribl --> Splunk indexer?
Hi @ww9rivers 

Firstly, users are granted Splunk roles based on their LDAP group in authentication.conf, and those roles (or the roles they inherit) restrict access to indexes with srchIndexesAllowed in authorize.conf. So if users can log in to Splunk, the Splunk roles apply, and having Splunk Cloud with on-premise LDAP shouldn't make a difference.

My only guess as to the cause of this issue is that one of the user's roles is overriding the permissions of the owner. You will note in the documentation that the search is not actually run as the owner, but rather with the permissions of the owner.

To narrow down the issue, create a test user from the GUI and add the user's roles to it one by one. Try running the saved search as the test user after each role is added, to see which role is causing the issue.
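For reference, the per-role index restrictions live in authorize.conf; a minimal sketch, where the role and index names are invented for illustration:

# authorize.conf
[role_app_readers]
importRoles = user
srchIndexesAllowed = app_logs;web_logs
srchIndexesDefault = app_logs

A user holding several roles effectively gets the union of srchIndexesAllowed across them, which is why adding roles to a test user one at a time isolates the one causing the unexpected behaviour.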
BUMP! I am having the same issue with a similar config. @himaniarora20 you didn't end up finding a resolution?
Hi @madhav_dholakia 

Here's what you should put in the alert's config to achieve what you want:

Search: | inputlookup <file>
Subject: Selfheal Alert - $result.Customer$ - $result.CheckName$ - $result.Device$ - $result.MonthYear$ - $result.Status$
Trigger: For each result
Throttle: [check]
Suppress results containing field value: Device (this will prevent Splunk from sending out duplicate alerts for the same device)
Suppress triggering for: <some time period> - set this to however often your lookup-populating report is scheduled to run
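If you prefer to manage this in configuration rather than the UI, a rough savedsearches.conf equivalent is sketched below; the stanza name and suppression period are assumptions, and the <file> placeholder stands in for your actual lookup:

[Selfheal Alert]
search = | inputlookup <file>
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.fields = Device
alert.suppress.period = 4h
action.email = 1
action.email.subject = Selfheal Alert - $result.Customer$ - $result.CheckName$ - $result.Device$ - $result.MonthYear$ - $result.Status$

Here alert.digest_mode = 0 corresponds to "For each result", and alert.suppress.fields with alert.suppress.period implement the per-device throttling.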
Hi @sonalpriya Are you asking which logs from Octopus should be ingested to Splunk via HEC? Or perhaps are you asking which Splunk internal logs will show the ingestion of Octopus logs?  
Hi @ELADMIN Would you please share the search query used to generate the chart in your screenshot?
Hi @Ahmed7312 would you please share a screenshot of the error?
Hi @wpb162  It could be that the removal of the users has not propagated to all members of the SHC yet. How many members are in your SHC? How long did you leave it after running the "splunk remove user" command?
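One way to confirm whether the removal has propagated is to list users on each member individually and see if the account still appears; a minimal sketch using the same CLI family as the remove command (credentials are placeholders):

$SPLUNK_HOME/bin/splunk list user -auth admin:yourpassword

Run this on every SHC member; if the deleted account still shows up on some members, the change has not replicated there yet.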
Here are a couple of things to check:

1. Check that the settings you have in props.conf are actually being applied to the sourcetype:

$SPLUNK_HOME/bin/splunk cmd btool props list test1:sec

2. Check the _internal logs for parsing errors related to this sourcetype:

index=_internal splunk_server=* source=*splunkd.log* (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose) (log_level=WARN OR log_level=ERROR) data_sourcetype="test1:sec"
Wow. For all my queries I had been using the following fields command under the assumption that it dropped _raw:

| fields _time, xxx, yyy, zzz, ....

Then one day I started a large mvexpand and ran into the memory limit. My thought upon seeing this suggestion was "Huh? Well, worth a try I guess."

| fields _time, xxx, yyy, zzz, ....
| fields - _raw

Boom, mvexpand completes successfully. The heck? It actually cut the search time in half too.
After an investigation, long story short: this is not possible, and it would need to be a new feature suggestion if someone wants to request it.

The explanation: I looked at the network logs for Dashboard Studio and found the payload for base and chained searches. The base search has its own parameter in the payload called 'search'. All chained searches are grouped together in a parameter called 'postprocess_searches'. There is no other parameter that supports a third search type called 'append'; it simply does not exist in the payload structure.

Furthermore, based on the name of the 'postprocess_searches' parameter, it is clear that only the base search gets distributable commands. All post-process (chained) searches run on the search head only. That is an important rule to keep in mind: if you want your search to be fast, all the compute-heavy commands need to be in the base search. Unfortunately, that means your base search needs to be a relatively large table of all sourcetypes appended together, performing whatever aggregation is required. Then use chained searches to slice and dice this large table into small bits, for example dividing by 'sourcetype' to branch the table out into multiple smaller "base" tables that serve as the basis for additional chained searches.

In my case, I formulated my base search as a merge of 3 different sourcetypes using a stats-based join. It is reasonably fast thanks to the base search being distributable, despite having 15+ chained searches running off of it!
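To illustrate the pattern (the index, sourcetype and field names below are invented for the example), the base search does the heavy, distributable aggregation across all sourcetypes:

index=app (sourcetype=web OR sourcetype=db OR sourcetype=cache)
| stats count AS events avg(duration) AS avg_duration BY sourcetype host

and each chained search then only filters and reshapes that table on the search head:

| search sourcetype=web
| sort - avg_duration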
Hi Jananie, just to clarify: all open cases will be migrated and available in the new Cisco support system, Support Case Manager, regardless of age. In addition, you'll get the last 30 days of closed cases (i.e. from May 14 to June 14). I assume for your use case above you're mostly interested in open cases (current issues), so you should be well covered.