All Posts

Hi @Abass42

You're right that editing historic data in Splunk isn't really possible. (You can delete data if you have the can_delete capability, though.) What I'm wondering is whether one of two things has happened:
1) The data has changed.
2) Your field extractions have changed.

They ultimately boil down to the same question: how does the "dlp_rule" field get defined? Is it an actual value in the _raw data (such as [time] - component=something dlp_rule=ABC user=Bob host=BobsLaptop), or is dlp_rule determined/evaluated/extracted from other data in the event, such as a status code or a regular expression? If it is extracted, the questions become: has the data format changed slightly? Something as simple as an extra space or field in the raw data can stop a field extraction from working. Or has the field extraction itself been changed? If you're able to provide a sample event (redacted, of course) it might help.

If you are unsure what fields are extracted, another thing you could do is run btool on your search head (if you are running on-prem), for example:

/opt/splunk/bin/splunk cmd btool props list netskope:application

Are you able to look at a raw historical event where you got a match you expected and compare it to a recent event to see whether there are any differences?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
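A minimal sketch of the raw-vs-extracted comparison suggested above, assuming the index/sourcetype from the original question and that dlp_rule would appear literally in _raw if it were present (both assumptions to adjust):

    index=testing sourcetype="netskope:application" earliest=-90d
    | eval in_raw=if(match(_raw, "dlp_rule"), "in_raw", "not_in_raw")
    | eval extracted=if(isnotnull(dlp_rule), "extracted", "not_extracted")
    | eval status=in_raw." / ".extracted
    | timechart span=1d count by status

If "in_raw / not_extracted" starts appearing at some point, the extraction broke; if "not_in_raw" takes over, the data itself changed.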
Hi @T2

The Cisco Security Cloud app does have a Duo Overview (Dashboard Studio) dashboard, but it is only high level and not the same as the 7 (Classic XML) dashboards in the Duo app. The Duo app uses a static source=duo and a macro to define the Duo index, whereas the Cisco Security Cloud app uses sourcetypes such as "cisco:duo:authentication" and also a Data Model for consuming the data via the overview dashboard.

Ultimately I think the answer is yes: if you have dashboards/searches built on the existing Duo app feed, you are likely going to need to update them to reflect the data coming in via the new app. I would recommend running the Cisco app in a development environment or locally, if possible, so that you can compare the data side by side and work to retain parity between the apps before migrating your production environment.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
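A minimal sketch of a side-by-side comparison while both feeds are running, assuming both land in an index named duo (the index name is an assumption; the Duo app actually resolves it via a macro):

    (index=duo source=duo) OR (index=duo sourcetype="cisco:duo:*")
    | eval feed=if(source="duo", "Duo app", "Cisco Security Cloud")
    | stats count earliest(_time) AS first_event latest(_time) AS last_event by feed, sourcetype

This makes it easy to see which sourcetypes the new app produces and which existing searches still depend on source=duo.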
Hello all, I am fairly new to the Splunk community and I'm attempting to reset my Splunk admin password, but for whatever reason it does not work. I delete the "etc/passwd" file and restart my Splunk instance, then attempt to log in to the web interface, but it never prompts me for a reset. I have even tried commands to do it manually, but nothing works. Has anyone else had a problem like this?
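For reference, a minimal sketch of the approach usually suggested on recent Splunk versions: move the passwd file aside rather than deleting it, then seed a new admin credential with user-seed.conf before restarting. Paths assume a default on-prem install under /opt/splunk; the username and password here are placeholders.

    # move the old password file out of the way
    mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak

    # create /opt/splunk/etc/system/local/user-seed.conf with:
    [user_info]
    USERNAME = admin
    PASSWORD = ChangeMe123!

    # then restart
    /opt/splunk/bin/splunk restart

On restart, Splunk recreates the admin account from the seed file, and you can log in with the seeded password.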
Here are instructions for how to do it: https://docs.splunk.com/Documentation/Splunk/9.4.1/Knowledge/Manageknowledgeobjectpermissions#Enable_a_role_other_than_admin_and_power_to_set_permissions_and_share_objects

This is not a capability. Instead, it requires a role that has write access to the app where the dashboard lives, and the user must own the dashboard. Only users with the admin role can share other people's knowledge objects.
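A minimal sketch of what that write access looks like in the app's metadata, assuming an app named my_app and a custom role named dashboard_editors (both names are illustrative):

    # $SPLUNK_HOME/etc/apps/my_app/metadata/local.meta
    []
    access = read : [ * ], write : [ admin, power, dashboard_editors ]

The empty [] stanza applies to the app as a whole, so members of dashboard_editors who own a dashboard in my_app can then set its permissions and share it.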
Will any of the knowledge objects or dashboards be affected once the add-on is applied when moving from DUO to Cisco?
Does this Lantern article help you? Also watch the video clip: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_in_Splunk_Enterprise. Here is another example of how to use it with Palo Alto logs: https://lantern.splunk.com/Data_Descriptors/Palo_Alto_Networks/Using_ingest_actions_to_filter_Palo_Alto_logs.
It sounds like you've settled on what might be an unsuitable solution to the problem. Tell us more about the problem itself and we may be able to suggest a better solution.

Lookup tables are for enriching events with additional fields based on one or more fields already in the events. They are not a conditional-execution mechanism.

If this is part of a dashboard (or can be made into a dashboard) then you have better options. You can have inputs the user can select to determine which calculations are made. That is well-trodden ground, so let us know if that path sounds feasible.
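A minimal sketch of that input-driven approach: a dropdown input sets a token (here called calc) to the field the user cares about, and the panel search uses the token directly. The index and field names are illustrative.

    index=perf_metrics $calc$=*
    | stats avg($calc$) AS average max($calc$) AS peak

Because the token is substituted before the search runs, only the calculation for the selected field is executed, which is exactly what a lookup table cannot do on its own.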
To add a bit of additional context to what's already been said: while most of the "other" Splunk components should be able to communicate with each other (or at least should be able to), forwarders are often (usually) in remote sites and environments which are completely separate from the "main" Splunk infrastructure, so in many cases querying them directly doesn't make much sense.

So yes, for _some_ HFs a separate role could be beneficial, but there can be many HFs (and most UFs) which you should simply have no access to.

That's also why app management with the DS works in pull mode: you serve your apps from the DS, but it's the deployment clients (usually forwarders) which pull their apps from the DS, and you have no way of forcing them to do so. They have their interval with which they "phone home" and that's it.
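A minimal sketch of the pull-mode setup on a deployment client, assuming a deployment server reachable at ds.example.com:8089 (the host is illustrative):

    # $SPLUNK_HOME/etc/system/local/deploymentclient.conf on the forwarder
    [deployment-client]
    phoneHomeIntervalInSecs = 60

    [target-broker:deploymentServer]
    targetUri = ds.example.com:8089

The client checks in on that interval and pulls whatever apps are assigned to it; the deployment server cannot push to the client or reach it between check-ins.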
Ingest actions are exactly what I am looking for. While not as intuitive as an entire tool dedicated to modifying data, I think this would do the trick, as long as I can trim out field values before forwarding them to an indexer for ingestion.

I am looking for docs explaining how to do just that, but I am struggling to find step-by-step instructions. Can you send me some good docs that show how to use expressions to do what I want?

Thank you. This may be my ticket to saving us hundreds of thousands of dollars in licensing costs.
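Not the ingest actions UI itself, but a minimal props.conf sketch of the classic way to trim values on a heavy forwarder before the data reaches the indexers; the sourcetype and pattern are illustrative, and ingest actions build broadly equivalent rules for you through the UI:

    # props.conf on the heavy forwarder
    [netskope:application]
    # strip a verbose key=value pair from _raw before it is forwarded
    SEDCMD-trim_payload = s/bulky_field=\S+\s?//g

Anything removed from _raw here never reaches the indexer, so it does not count against the license.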
Yeah, we make adjustments with new indexes; however, the large indexes were created before I got hired, so I'm actively trying to reduce ingest for what's already flowing. Great advice, btw.
Brett - do you have any further guidance on making this app (7371) work? We are trying to ingest Atlassian logs from a trusted partner into our Splunk. They pointed us to app 7371, which we installed, but we don't see any options for configuration - not like we're used to with other apps, anyway. There is no "input" tab, no "configuration" tab, no "proxy" tab. We get one page with 'name', 'update checking', 'visible' and 'upload asset', and nothing else: no place to enter the API key they sent us, nowhere to enter a file path. At this point we have the app installed but no idea how to get the logs to come over.
It should be stated up-front that indexes cannot be reduced in size. You must wait for buckets to be frozen for data to go away. The best you can do is reduce how much is stored in new buckets.

You've already taken a good first step by eliminating duplicate events.

Next, look at indexed fields. Fields are best extracted at search-time rather than at index-time. Doing so helps indexer performance, saves space in the indexes, and offers more flexibility with fields. Look at the INDEXED_EXTRACTIONS settings in your props.conf files. Each of them will create index-time fields. JSON data is especially verbose, so KV_MODE=json should be used instead.
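A minimal before/after props.conf sketch for a JSON feed (the sourcetype name is illustrative):

    # Before: index-time extraction, every field stored in the index
    [my:json:sourcetype]
    INDEXED_EXTRACTIONS = json

    # After: search-time extraction on the search head
    [my:json:sourcetype]
    KV_MODE = json

Removing INDEXED_EXTRACTIONS only affects newly indexed buckets; the fields already written to existing buckets stay there until those buckets freeze.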
I totally agree with @PickleRick - you should disable your swap at least temporarily, and after you have confirmed that everything is working and/or fixed the root cause of the swap usage, remove it permanently. When you have dedicated servers for Splunk, they should be sized correctly to run your normal workload.
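A minimal sketch of doing that on a Linux host (commands assume root/sudo access):

    # turn swap off immediately
    sudo swapoff -a
    # then comment out the swap entry in /etc/fstab so it stays off after a reboot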
You need to remember that setting a new sourcetype value for your event doesn't send it through the ingest pipeline again! So don't expect that setting the sourcetype to B will apply B's definitions to that event. No, it just carries on with sourcetype A's settings.
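A minimal sketch of such a rewrite, with illustrative sourcetype names, and a reminder of what it does and doesn't do:

    # props.conf
    [sourcetype_A]
    TRANSFORMS-rename_st = set_sourcetype_b

    # transforms.conf
    [set_sourcetype_b]
    REGEX = .
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::sourcetype_B

    # The event is indexed with sourcetype_B, but it was parsed (line breaking,
    # timestamping, other index-time transforms) using sourcetype_A's settings.
    # Only search-time settings keyed on sourcetype_B will apply later.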
You could also use site0 as the site value instead of site1 or site2. Then searches are managed a little differently than with an exact site<#>. You will find more information in the docs that were pointed out to you.
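A minimal sketch of where that lives, assuming the setting is made on a search head in a multisite indexer cluster:

    # server.conf on the search head
    [general]
    site = site0

With site0 the search head disables search affinity and searches bucket copies across all sites, instead of preferring copies from its own site as it would with site1 or site2.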
Are you trying to put it into the _time field instead of timestamp? Then just modify @livehybrid's INGEST_EVAL example to put that value into the _time field instead of timestamp. Remember to remove the json_set part as well.
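A minimal sketch of that modification, assuming the value arrives in a JSON field called eventTime in an ISO-8601-like format (the field name, the format string, and the sourcetype are all assumptions to adjust):

    # transforms.conf
    [set_time_from_eventtime]
    INGEST_EVAL = _time=strptime(json_extract(_raw, "eventTime"), "%Y-%m-%dT%H:%M:%S%z")

    # props.conf
    [my:json:sourcetype]
    TRANSFORMS-set_time = set_time_from_eventtime

Because it writes straight into _time, there is no json_set call to keep.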
Basically you could use a token with an SSO (e.g. SAML) user, but if I recall correctly there can be situations where the old SSO authentication cache/token/credential vanishes, and then that user must log in again via the GUI to get it working again. If you are using a token for a user who works in the GUI regularly, this probably isn't a real issue. But if you are adding a token to a service user that only uses the REST API, this could easily hit you. For that reason you should use a local user in cases like this, if your company policy allows it.
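A minimal sketch of the kind of REST call this affects, with an illustrative host and a placeholder token:

    curl -k -H "Authorization: Bearer <your-token>" \
         https://splunk.example.com:8089/services/search/jobs/export \
         -d search="search index=_internal | head 5" -d output_mode=json

If the token belongs to a SAML-only service account whose cached SSO credential has lapsed, calls like this can start failing until someone logs that account in through the web UI again, which is why a local service account tends to be the safer choice.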
There are several ways to trim your data before indexing it to disk. The best option depends on your environment and what kind of data and use case you have.

The traditional way is to use props.conf and transforms.conf files to do this. It works in all Splunk environments, but it can be a little challenging if you haven't used it before! Here is a link to the documentation: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad There are lots of examples in the community and on other pages; just ask Google to find them.

Another option is to use Edge Processor. It's newer and probably easier to use and understand, but currently it needs a Splunk Cloud stack to manage configurations, even though it can work independently on-prem after configuration. Here is more about it: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/EdgeProcessor/FilterPipeline As I said, currently only with SCP, but it's coming to on-prem in the future.

The last on-prem option is ingest actions, which works both on-prem and in SCP: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/DataIngest

And if you are in SCP and ingesting there, the last option is Ingest Processor: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/IngestProcessor/FilterPipeline

r. Ismo
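A minimal sketch of the traditional props/transforms route mentioned above, dropping unwanted events before they are written to disk (the sourcetype and regex are illustrative):

    # props.conf on the indexer or heavy forwarder
    [my:chatty:sourcetype]
    TRANSFORMS-drop_noise = drop_debug_events

    # transforms.conf
    [drop_debug_events]
    REGEX = \slevel=DEBUG\s
    DEST_KEY = queue
    FORMAT = nullQueue

Events matching the regex are routed to the nullQueue, so they are never indexed and do not count against the license.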
I have some Netskope data. Searching it goes something like this:

index=testing sourcetype="netskope:application" dlp_rule="AB C*" | lookup NetSkope_test.csv dlp_rule OUTPUT C_Label as "Label Name" | eval Date=strftime(_time, "%Y-%m-%d"), Time=strftime(_time, "%H:%M:%S") | rename user as User dstip as "Destination IP" dlp_file as File url as URL | table Date Time User URL File "Destination IP" User "Label Name"

I am tracking social security numbers and how many times one leaves the firm. I even mapped the specific dlp_rule values found to values like C1, C2, C3... When I added this query, I had to update the other panels accordingly to track the total number of SSNs leaving the firm through various methods. On all of them, I had the above filter:

index=testing sourcetype="netskope:application" dlp_rule="AB C*"

And I am pretty sure I had results. For the dlp_rule value, I had strings like AB C*, and I had 5 distinct values I was mapping against.

Looking at the dataset now, a few months later, I don't see any values matching the above criteria, AB C*. I have 4 values, and the dlp_rule that has a null value appears over 38 million times. I think the null value is supposed to be the AB C*. I don't have any screenshots proving this, though.

My question is, after discussing this with the client, what could have happened? When searching over all time, the screenshot above is what I get. If I understand how Splunk works even vaguely, I don't believe Splunk has the power to go in and edit old ingested logs - in this case, to go through and remove a specific value from all old logs of a specific data source. That doesn't make any logical sense. Both the client and I remember seeing the values specified above. They are going to contact Netskope to see what happened, but as far as I know, I have not changed anything that is related to this data source.

Can old data change in Splunk? Can a new props.conf or transforms apply to old data?

Thank you for any guidance.
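A minimal sketch for pinning down when the "AB C*" values were last seen, using the index/sourcetype from the search above (an all-time search can be expensive, so narrow it if possible):

    index=testing sourcetype="netskope:application" dlp_rule="AB C*" earliest=0
    | stats earliest(_time) AS first_seen latest(_time) AS last_seen count BY dlp_rule
    | fieldformat first_seen=strftime(first_seen, "%Y-%m-%d %H:%M:%S")
    | fieldformat last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

If last_seen lines up with an app upgrade or a Netskope-side format change, that is the first place to look; if nothing comes back at all over all time, either the value never existed in the raw data in that form or the current search-time extraction no longer produces it, which ties back to checking _raw directly.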
I have a unique situation with my customer. I want to create a lookup table into which the customer can put the fields they want values for. The lookup has a column called fieldvalue, e.g. CPU in the list. If the field cpu is in the table, for instance, then we have to run a calculation with the cpu field for all the events that have cpu. The fields the customer selects are numeric fields. The things I have tried are not returning the value in the cpu field. Without discussing customer specifics, using calculated fields won't work and KPI stuff won't work. For what they want, I need to do it this way.
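Not the only way to approach this, but a minimal sketch under these assumptions: the lookup is field_selection.csv with a fieldvalue column, and the metric events live in index=perf_metrics (all names illustrative). The subsearch turns the lookup row into a search filter so the calculation only runs over events that carry the selected field:

    index=perf_metrics
        [| inputlookup field_selection.csv
         | where lower(fieldvalue)="cpu"
         | eval query="cpu=*"
         | fields query ]
    | stats avg(cpu) AS avg_cpu max(cpu) AS max_cpu

When cpu is present in fieldvalue, the subsearch expands to cpu=* and the stats runs; how an empty subsearch result behaves differs between Splunk versions, so test the "field not selected" case in your environment. Making this fully generic for any field name in the lookup (rather than a hard-coded cpu) usually needs a different pattern, so it is worth a follow-up with sample data.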
I have a unique situation with my customer. I want to create a lookup table that the customer can put  fields they want the value for. lookup has column called fieldvalue . ie. CPU in the list.  if that field is cpu is in the table for instance, then we have to run a calculation with the Cpu field. for all the events who have cpu.  fields customer selects are number fields. The things i have tried are not returning the value in the cpu field.  Without discussing customer stuff, using calculated fields won't work, KPI stuff won't work. For what they want, I need to do it this way.