All Posts

Hi @g_cremin

I believe "actions" should be an array of actions, not a dict? This might be affecting things:

...
"actions": [
    {
        "action": "test_connectivity",
        "identifier": "test_connectivity",
        "description": "Tests connectivity to Wazuh",
        "type": "test",
        "read_only": true,
        "parameters": [],
        "output": []
    }
],
...

For more detail on the app.json schema, check out https://docs.splunk.com/Documentation/SOAR/current/DevelopApps/Metadata
Hi

To reset the admin password, ensure you are stopping Splunk completely before deleting the passwd file.

# Stop Splunk Enterprise
cd $SPLUNK_HOME/bin
./splunk stop

# Remove the password file
rm $SPLUNK_HOME/etc/passwd

Now create a user-seed file ($SPLUNK_HOME/etc/user-seed.conf):

[user_info]
USERNAME = admin
PASSWORD = YourPassword

Once done, start Splunk:

$SPLUNK_HOME/bin/splunk start

You should now be able to log in with the user/password set in the user-seed.conf file.

For more info check the following docs page: https://docs.splunk.com/Documentation/Splunk/latest/Security/Secureyouradminaccount#Reset_the_administrator_password
Hi @Abass42

You're right in that editing historic data in Splunk isn't really possible (you can delete data if you have the can_delete capability, though).

What I'm wondering is whether one of two things may have happened:
1) The data has changed
2) Your field extractions have changed

They ultimately boil down to the same question - how does the "dlp_rule" field get defined? Is it an actual value in the _raw data (such as [time] - component=something dlp_rule=ABC user=Bob host=BobsLaptop), or is dlp_rule determined/evaluated/extracted from other data in the event, such as a status code or a regular expression? If it is the latter, the questions become: has the data format changed slightly? That could be something as simple as an additional space or field in the raw data which has stopped the field extraction working. Or has the field extraction itself been changed at all?

If you're able to provide a sample event then it might help - redacted of course.

Another thing you could do, if you are unsure which fields are extracted, is run btool on your search head (if you are running on-prem), such as:

/opt/splunk/bin/splunk cmd btool props list netskope:application

Are you able to look at a raw historical event where you got the match you expected and compare it to a recent event to see if there are any differences?
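To make that concrete, here is a purely illustrative props.conf sketch of the kind of search-time extraction that can silently break when the raw format shifts - the regex is an assumption, only the netskope:application sourcetype comes from the btool command above:

# props.conf - hypothetical extraction; an extra space, quote or renamed key in _raw would stop this matching
[netskope:application]
EXTRACT-dlp_rule = dlp_rule=(?<dlp_rule>\S+)

If btool shows something along these lines, compare the regex character by character against an old and a new raw event.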
Hi @T2

The Cisco Security Cloud app does have a Duo Overview (Dashboard Studio) dashboard, but it is only high level and not the same as the 7 (Classic XML) dashboards in the Duo app. The Duo app uses a static source=duo and a macro to define the Duo index, whereas the Cisco Security Cloud app uses sourcetypes such as "cisco:duo:authentication" and also a data model for consuming the data via the overview dashboard.

Ultimately I think the answer is yes - if you have dashboards/searches built on the existing Duo app feed then you are likely going to need to update these to reflect the data coming in via the new app. I would recommend running the Cisco app in a development environment or locally, if possible, so that you can compare the data side by side and work to retain parity between the apps before migrating your production environment.
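As a rough way to compare the two feeds (the index names below are placeholders; only source=duo and the cisco:duo:* sourcetypes come from the apps discussed above), you could run each of these over the same time range and diff the resulting field lists:

index=your_old_duo_index source=duo | fieldsummary | fields field
index=your_new_duo_index sourcetype="cisco:duo:*" | fieldsummary | fields field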
Hello all, I am fairly new to the Splunk community and I'm attempting to reset my Splunk admin password, but for whatever reason it does not work. I go and delete "etc/passwd", restart my Splunk instance, and attempt to log in to the web interface, but it never prompts me for a reset. I have even tried commands to do it manually, but nothing works. Has anyone else had a problem like this?
Here are instructions for how to do it: https://docs.splunk.com/Documentation/Splunk/9.4.1/Knowledge/Manageknowledgeobjectpermissions#Enable_a_role_other_than_admin_and_power_to_set_permissions_and_share_objects This is not a capability; instead, it requires a role which has write access to the app where the dashboard lives, and the user must also own the dashboard. Only users with the admin role can share other people's knowledge objects (KOs).
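For illustration, the docs page above boils down to granting the role write access on the app, either through the app's permissions UI or in its metadata - a minimal sketch, assuming a role called dashboard_editors and an app directory of your choosing:

# $SPLUNK_HOME/etc/apps/<your_app>/metadata/local.meta
[]
access = read : [ * ], write : [ admin, power, dashboard_editors ]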
Will any of the knowledge objects or dashboards be affected once the add-on is applied when moving from DUO to Cisco?
Does this Lantern article help you? Also watch the video clip: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_in_Splunk_Enterprise. Another example of how to use it with Palo Alto (PAN) logs: https://lantern.splunk.com/Data_Descriptors/Palo_Alto_Networks/Using_ingest_actions_to_filter_Palo_Alto_logs.
It sounds like you've settled on what might be an unsuitable solution to the problem. Tell us more about the problem itself and we may be able to suggest a better solution.

Lookup tables are for enriching events with additional fields based on one or more fields already in the events. They are not a conditional-execution mechanism.

If this is part of a dashboard (or can be made into a dashboard) then you have better options. You can have inputs the user can select to determine which calculations are made, as sketched below. That is well-trodden ground, so let us know if that path sounds feasible.
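A minimal Simple XML sketch of that idea - the index, field and calculation names are placeholders, not taken from your environment:

<form>
  <fieldset submitButton="false">
    <input type="dropdown" token="calc">
      <label>Calculation</label>
      <choice value="avg">Average</choice>
      <choice value="max">Maximum</choice>
      <default>avg</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <!-- the selected token decides which stats function runs -->
          <query>index=your_index | stats $calc$(duration) AS result</query>
        </search>
      </table>
    </panel>
  </row>
</form>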
To add a bit of additional context to what's already been said: while most of the "other" Splunk components should be able to communicate with each other (or at least should be reachable), forwarders are often (usually) in remote sites and environments which are completely separate from the "main" Splunk infrastructure, so in many cases querying them directly doesn't make much sense. So yes, for _some_ HFs a separate role could be beneficial, but there can be many HFs (and most UFs) to which you should simply have no access. And that's also why app management with the DS works in pull mode - you serve your apps from the DS, but it's the deployment clients (usually forwarders) which pull their apps from the DS, and you have no way of forcing them to do so. They have their interval with which they "phone home" and that's it.
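For reference, that phone-home behaviour is configured on the client side; a sketch of deploymentclient.conf on a forwarder, with the hostname and interval as placeholders:

# deploymentclient.conf on the deployment client (e.g. a forwarder)
[target-broker:deploymentServer]
targetUri = ds.example.com:8089

[deployment-client]
# how often the client checks in with the deployment server, in seconds
phoneHomeIntervalInSecs = 60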
Ingest actions are exactly what I am looking for. While not as intuitive as an entire tool dedicated to modifying data, I think this would do the trick, as long as I can trim out field values before forwarding them to an indexer for ingestion.

I am looking for docs explaining how to do just that, but I am struggling to find step-by-step instructions. Can you send me some good docs that show how to use expressions to do what I want?

Thank you. This may be my ticket to saving us hundreds of thousands of dollars in licensing costs.
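For reference, a hand-rolled props.conf sketch of the same idea, trimming a verbose field value before the data reaches the indexers - the sourcetype, field name and pattern are placeholders, and ingest actions can build equivalent rules through the UI:

# props.conf on the heavy forwarder (or wherever parsing happens)
[your:sourcetype]
# rewrite the bulky payload="..." value to a short marker before indexing
SEDCMD-trim_payload = s/payload="[^"]*"/payload="[trimmed]"/g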
Yeah, we make adjustments with new indexes; however, the large indexes were created before I got hired, so I'm actively trying to reduce ingest with what's already flowing. Great advice, btw.
Brett - do you have any further guidance on making this app (7371) work? We are trying to ingest Atlassian logs from a trusted partner into our Splunk. They pointed us to app 7371, which we installed, but we don't see any options for configuration - not like we're used to with other apps, anyway. No "input" tab, no "configuration" tab, no "proxy" tab. We get one page with 'name', 'update checking', 'visible' and 'upload asset', and nothing else. No place to enter the API key they sent us, nowhere to enter a file path. Nothing. At this point we have the app installed but no idea how to get the logs to come over.
It should be stated up-front that indexes cannot be reduced in size.  You must wait for buckets to be frozen for data to go away.  The best you can do is reduce how much is stored in new buckets. You've already taken a good first step by eliminating duplicate events. Next, look at indexed fields.  Fields are best extracted at search-time rather than at index-time.  Doing so helps indexer performance, saves space in the indexes, and offers more flexibility with fields. Look at the INDEXED_EXTRACTIONS settings in your props.conf files. Each of them will create index-time fields.  JSON data is especially verbose so KV_MODE=json should be used, instead.
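A minimal props.conf sketch of that switch, assuming a JSON sourcetype called your:json:sourcetype:

# Before (props.conf where parsing happens): creates index-time fields for every JSON key
[your:json:sourcetype]
INDEXED_EXTRACTIONS = json

# After (props.conf on the search head): the same fields, extracted at search time instead
[your:json:sourcetype]
KV_MODE = json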
I totally agree with @PickleRick - you should disable your swap at least temporarily, and after you have confirmed that everything is working and/or fixed the root cause of the swap usage, remove it permanently. When you have dedicated servers for Splunk, they should be sized correctly to run your normal workload.
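For reference, temporarily disabling swap on a Linux host is just the following (run as root, and only after confirming there is enough headroom in RAM):

# turn off all swap devices and files immediately
swapoff -a
# to keep it off across reboots, also comment out the swap line(s) in /etc/fstab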
You need to remember that setting a new sourcetype value for your event does not make it travel through the ingestion pipeline again! So don't expect that setting the sourcetype to B will apply B's definitions to that event - it just carries on forward with sourcetype A's settings.
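To illustrate, a typical index-time sourcetype rewrite looks roughly like the sketch below (stanza name, regex and sourcetypes are placeholders); even though the event ends up labelled B, it was parsed with A's settings, and B's props only apply at search time:

# transforms.conf
[force_sourcetype_b]
REGEX = some_pattern
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::B

# props.conf
[A]
TRANSFORMS-set_sourcetype = force_sourcetype_b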
You could also use site0 as the site value instead of site1 or site2. Splunk then manages searches a little differently than when an exact site<#> is used. You will find more information in the docs that have been pointed out to you.
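A minimal sketch of that setting, assuming a search head in a multisite cluster whose site affinity you want to turn off:

# server.conf on the search head
[general]
site = site0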
Are you trying to put it into the _time field instead of timestamp? Then just modify @livehybrid's INGEST_EVAL example to put that value into the _time field instead of timestamp. Remember to remove the json_set part as well.
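Roughly, the modified transform could look like the sketch below - this is an assumption only, since the original example isn't reproduced here, and the JSON field name and time format would need adapting to your data:

# transforms.conf
[set_time_from_event]
INGEST_EVAL = _time=strptime(json_extract(_raw, "event_time"), "%Y-%m-%dT%H:%M:%S%z")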
Basically you could use a token with an SSO user (e.g. SAML), but if I recall correctly there can be situations where the old SSO authentication cache/token/credential vanishes, and then that user must log in again via the GUI to get it working again. If you are using a token for a user who logs into the GUI regularly, this probably isn't a real issue. But if you are adding a token to a service user which only uses the REST API, then this could easily hit you. For that reason you should use a local user in these kinds of cases, if your company policy allows it.
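For a local service user, creating a token over the REST API is straightforward - a sketch, with the hostname, credentials and names as placeholders:

# request a token for the local user svc_reporting (token authentication must be enabled on the instance)
curl -k -u admin:yourpassword https://splunk.example.com:8089/services/authorization/tokens \
     -d name=svc_reporting -d audience="scheduled REST calls"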
There are several ways to trim your data before indexing it to disk. The best option depends on your environment and on the kind of data and use case you have.

The traditional way is to use props.conf and transforms.conf files. It works in all Splunk environments, but it can be a little challenging if you haven't used it before. Here is the link to the documentation: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad There are lots of examples in the community and elsewhere; just ask Google to find them.

Another option is to use Edge Processor. It's newer and probably easier to use and understand, but it currently needs a Splunk Cloud stack to manage configurations, even though it can work independently on-prem once configured. Here is more about it: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/EdgeProcessor/FilterPipeline As I said, currently only with SCP, but it's also coming to on-prem in the future.

The last on-prem option is ingest actions, which works both on-prem and in SCP: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/DataIngest

And if you are in SCP and ingesting there, then the last option is Ingest Processor: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/IngestProcessor/FilterPipeline

r. Ismo
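As a quick taste of the traditional props/transforms route from the first link, dropping unwanted events to the nullQueue looks roughly like this (the sourcetype and regex are placeholders):

# props.conf
[your:sourcetype]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue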