Hi Team, Good morning. My name is Manish and I am totally new to the world of Splunk. Can anyone help me learn Splunk from scratch with respect to cyber security and networking jobs? I am a keen learner and I want all the help possible; I am asking for this as a favour. Or kindly guide me to the best platform for Splunk certification that has the maximum possibility of landing a job in India. Note - I do not know anything about the Splunk world right now. Thanks a lot. Regards
Thanks very much. I am a new learner and starting from scratch. Can you or someone help me learn Splunk from scratch as a favour?
I did, actually. I just found out that you need to try with the indexer instead of the search head. Also, attach an IAM role to your Splunk server and use the ARN of that same role in your Splunk config.
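For anyone landing here later: a minimal sketch of the trust relationship such an IAM role typically needs, assuming the Splunk instance runs on EC2 (the account details and the permissions policy are omitted and depend entirely on your setup):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}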
I do not know much about Cribl, but these settings in props.conf might help:

props.conf on the UF:

[test]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\{ \"__CURSOR\"

props.conf on the indexer (assuming __REALTIME_TIMESTAMP is the timestamp field):

[test]
KV_MODE = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{ \"__CURSOR\"
MUST_BREAK_AFTER = \}
TIME_PREFIX = \"__REALTIME_TIMESTAMP\"\s\:\s\"
TIME_FORMAT = %s%6N
MAX_TIMESTAMP_LOOKAHEAD = 18
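For context on why the breaker regexes key off { "__CURSOR": journald events exported as JSON (e.g. via journalctl -o json) begin with a __CURSOR field, and __REALTIME_TIMESTAMP is microseconds since the epoch, which is exactly what TIME_FORMAT = %s%6N parses. A trimmed, illustrative event (the values here are made up):

{ "__CURSOR" : "s=abc123;i=4f2", "__REALTIME_TIMESTAMP" : "1718040123456789", "PRIORITY" : "6", "MESSAGE" : "Started session 42 of user root." }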
Yes, the data is sent from the Splunk UF --> Cribl (Stream / Worker) --> Splunk indexer.
So after much thought and deliberation, this is how you can see the real-time EPS and the trends around it in UBA:
1. Add the parameter ?system to the URL, right before the # value.
2. Once done, go to Manage -> Data Sources -> select a data source to reveal the real-time EPS and the trends associated with it.
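For illustration only (the host and fragment here are hypothetical, so adapt to your deployment):

Before: https://uba.example.com/#/manage/datasources
After:  https://uba.example.com/?system#/manage/datasources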
Dear Regina, I am more interested in closed cases. Let me explain my view. For example, I get an issue with the Java agent working with some different application which is rarely used by customers around the world. Such a case was already experienced by some customer, and support provided a solution after a lot of troubleshooting and resolved it. Now that ticket was closed back in 2023. It is a collection of brainstorms from experts and a great knowledge base. If the closed cases become volatile with this new migration, then when a similar issue occurs, the support team, consultants and customers will have to sit for hours again to find the solution. There are many instances similar to this. It will take years for Cisco to build such a valuable knowledge base again. My humble request is to keep the database of older cases just as a reference point instead of deleting it forever. Thanks for considering my request. Jananie
You mentioned in your post you are using UF to send the data. Is the data going from Splunk UF --> Cribl --> Splunk indexer?
Hi @ww9rivers

Firstly, users are granted Splunk roles based on their LDAP group in authentication.conf, and those roles (or the roles they inherit) restrict access to indexes with srchIndexesAllowed in authorize.conf. So if users can log in to Splunk, the Splunk roles apply, and having Splunk Cloud with on-premise LDAP shouldn't make a difference. My only guess as to the cause of this issue is that some role the user has is overriding the permissions of the owner. You will note in the documentation that the search is not actually run as the owner, but rather with the permissions of the owner. To narrow down the issue, create a test user from the GUI and add the roles the user has to it one by one. Try running the saved search as the test user after each role is added, to see which role is causing the issue.
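A minimal sketch of where those settings live, with made-up role, group and index names:

authentication.conf:

[roleMap_LDAP]
splunk_analysts = CN=SplunkAnalysts,OU=Groups,DC=example,DC=com

authorize.conf:

[role_splunk_analysts]
importRoles = user
srchIndexesAllowed = app_logs;security_logs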
BUMP! I am having the same issue with a similar config. @himaniarora20, did you end up finding a resolution?
Hi @madhav_dholakia

Here's what you should put in the alert's config to achieve what you want:

Search: | inputlookup <file>
Subject: Selfheal Alert - $result.Customer$ - $result.CheckName$ - $result.Device$ - $result.MonthYear$ - $result.Status$
Trigger: For each result
Throttle: [checked]
Suppress results containing field value: Device (this will prevent Splunk sending out duplicate alerts for the same device)
Suppress triggering for <some time period>: set this to however often your lookup-populating report is scheduled to run
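If you'd rather set this in savedsearches.conf than in the UI, a rough sketch of the equivalent settings (the stanza name, lookup file and suppression period are placeholders, and the scheduling and trigger-condition settings are omitted here):

[Selfheal Alert]
search = | inputlookup selfheal_status.csv
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.fields = Device
alert.suppress.period = 24h
action.email = 1
action.email.subject = Selfheal Alert - $result.Customer$ - $result.CheckName$ - $result.Device$ - $result.MonthYear$ - $result.Status$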
Hi @sonalpriya Are you asking which logs from Octopus should be ingested to Splunk via HEC? Or perhaps are you asking which Splunk internal logs will show the ingestion of Octopus logs?  
Hi @ELADMIN Would you please share the search query used to generate the chart in your screenshot?
Hi @Ahmed7312 would you please share a screenshot of the error?
Hi @wpb162  It could be that the removal of the users has not propagated to all members of the SHC yet. How many members are in your SHC? How long did you leave it after running the "splunk remove user" command?
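To confirm all members are up and talking to the captain, one option is to run the SHC status command on any member (the credentials here are placeholders):

$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme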
Here's a couple of things to check:

1. Check the settings you have set in props.conf are actually being applied to the sourcetype:

$SPLUNK_HOME/bin/splunk cmd btool props list test1:sec

2. Check the _internal logs for errors related to parsing for this sourcetype:

index=_internal splunk_server=* source=*splunkd.log* (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose) (log_level=WARN OR log_level=ERROR) data_sourcetype="test1:sec"
Wow. For all my queries, I had been using the following fields command under the assumption it dropped _raw:

| fields _time, xxx, yyy, zzz, ....

Then one day I started a large mvexpand and ran into the memory limit. My thought upon seeing this was 'Huh? Well, worth a try I guess.'

| fields _time, xxx, yyy, zzz, ....
| fields - _raw

Boom, mvexpand completes successfully. The heck? It actually cut the search time in half too.
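For anyone else hitting this: an inclusive fields list keeps internal fields such as _raw and _time unless you remove them explicitly, so _raw was still riding along into mvexpand and counting against its memory limit. A minimal sketch of the fix, with hypothetical index and field names:

index=web sourcetype=access_json
| fields _time, user, actions
| fields - _raw
| mvexpand actions
| stats count BY user, actions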
After an investigation, long story short: this is not possible, and it needs to be a new feature suggestion if someone wants to request it.

The explanation is that I looked at the network logs for Dashboard Studio and found the payload for base and chained searches. A base search has its own parameter in the payload called 'search'. All chained searches are grouped together in a parameter called 'postprocess_searches'. There is no other parameter that supports a third search type called 'append'; it is simply non-existent in the payload structure.

Furthermore, based on the name of the 'postprocess_searches' parameter, it is clear only the base search gets distributable commands. All post-process searches (chained searches) run on the search head only. That is an important rule to keep in mind: if you want your search to be fast, all the compute-heavy commands need to be in the base search. Unfortunately, that means your base search needs to be a relatively large table of all sourcetypes appended together into a single table, doing whatever aggregation is required. Then use chained searches to slice and dice this large table into small bits, for example dividing by 'sourcetype' to branch the table into multiple smaller "base" tables as the basis for additional chained searches.

In my case, I formulated my base search as a merge of 3 different sourcetypes using a stats-based join. It is reasonably fast thanks to the distributable base search, despite having 15+ chained searches running off of it!
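A minimal sketch of that pattern, with made-up index, sourcetype and field names:

Base search (distributable; does the heavy aggregation once):

index=app sourcetype IN (web, api, db)
| stats count AS events, avg(duration) AS avg_duration BY sourcetype, host

Chained search (runs on the search head; slices the shared base table):

| where sourcetype="web"
| stats sum(events) AS web_events BY host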
Hi Jananie, just to clarify: all open cases will be migrated and available in the new Cisco Support system, Support Case Manager, regardless of age. In addition, you'll get the last 30 days of closed cases (i.e., from May 14 - June 14). I'm assuming for your use case above you're mostly interested in open cases (current issues), so you should be well covered.
Anyone know of any examples on Splunkbase that have JavaScript-written commands using the Python SDK? I've written about a dozen custom commands using Python and am familiar with that process. The dev docs suggest the Splunk SDK for Python should be used for JS commands, but I'm not understanding how that's possible without importing libraries like Flask. https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/nonpythonscscs