All Posts

There are several ways to trim your data before indexing it to disk. The best option depends on your environment and on what kind of data and use case you have.

The traditional way is to use props.conf and transforms.conf. This works in all Splunk environments, but it can be a little challenging if you haven't used it before. Here is a link to the documentation: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad There are lots of examples in the community and on other pages; just ask Google to find them.

Another option is to use Edge Processor. It's newer and probably easier to use and understand, but it currently needs a Splunk Cloud stack to manage configurations, even though it can work independently on-prem after configuration. Here is more about it: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/EdgeProcessor/FilterPipeline As I said, currently only with SCP, but it's also coming to on-prem in the future.

The remaining on-prem option is Ingest Actions, which works both on-prem and in SCP: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/DataIngest

And if you are in SCP and ingesting there, the last option is Ingest Processor: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/IngestProcessor/FilterPipeline

r. Ismo
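To make the props/transforms option concrete, here is a minimal sketch (the sourcetype, stanza name and regex are made-up examples for illustration, not from the docs). It routes matching events to the nullQueue on a HF or indexer, so they are dropped before indexing and never count against the license:

# props.conf
[my:noisy:sourcetype]
TRANSFORMS-drop_debug = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue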
I have some Netskope data. Searching it goes something like this:

index=testing sourcetype="netskope:application" dlp_rule="AB C*"
| lookup NetSkope_test.csv dlp_rule OUTPUT C_Label as "Label Name"
| eval Date=strftime(_time, "%Y-%m-%d"), Time=strftime(_time, "%H:%M:%S")
| rename user as User dstip as "Destination IP" dlp_file as File url as URL
| table Date Time User URL File "Destination IP" "Label Name"

I am tracking social security numbers and how many times one leaves the firm. I even mapped the specific dlp_rule values found to values like C1, C2, C3... When I added this query, I had to update the other panels accordingly to track the total number of SSNs leaving the firm through various methods. On all of them, I had the above filter:

index=testing sourcetype="netskope:application" dlp_rule="AB C*"

And I am pretty sure I had results. For the dlp_rule value, I had strings like "AB C*", and I had 5 distinct values I was mapping against.

Looking at the dataset now, a few months later, I don't see any values matching the above criteria, "AB C*". I have 4 values, and the dlp_rule with a null value appears over 38 million times. I think the null value is supposed to be the "AB C*" one. I don't have any screenshots proving this though.

My question is: after discussing this with the client, what could have happened? When searching over all time, the above is what I get. If I understand how Splunk works even vaguely, I don't believe Splunk has the power to go in and edit old ingested logs, in this case to remove a specific value from all old logs of a specific data source. That doesn't make any logical sense. Both the client and I remember seeing the values specified above. They are going to contact Netskope to see what happened, but as far as I know, I have not changed anything related to this data source.

Can old data change in Splunk? Can a new props.conf or transforms.conf apply to old data?

Thank you for any guidance.
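One way to narrow this down, since dlp_rule is most likely a search-time extracted field: Splunk never rewrites raw events once they are indexed, but a changed or broken search-time extraction can make a field disappear from old events. A quick sketch (reusing the index and sourcetype from the post above) that searches the raw text instead of the field:

index=testing sourcetype="netskope:application" "AB C"
| eval rule_state=if(isnull(dlp_rule), "dlp_rule missing", dlp_rule)
| stats count by rule_state

If raw events containing "AB C" still exist but dlp_rule is null on them, a props/transforms (extraction) change is the likely cause; if the raw string itself is gone, the change happened at the source or at ingest time.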
I have a unique situation with my customer. I want to create a lookup table into which the customer can put the fields they want the value for. The lookup has a column called fieldvalue, e.g. with CPU in the list. If that field, say cpu, is in the table, then we have to run a calculation with the cpu field for all the events that have cpu. The fields the customer selects are numeric fields. The things I have tried are not returning the value in the cpu field. Without discussing customer specifics: calculated fields won't work and the KPI approach won't work. For what they want, I need to do it this way.
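One pattern that can do this is to let a subsearch paste the field name from the lookup into an eval. This is only a sketch, with a made-up base search and a hypothetical lookup customer_fields.csv holding one field name in the fieldvalue column:

index=testing sourcetype="host:perf"
| eval selected_value=[| inputlookup customer_fields.csv | head 1 | return $fieldvalue]
| where isnotnull(selected_value)
| stats avg(selected_value) AS avg_value, max(selected_value) AS max_value

With fieldvalue=cpu in the lookup, the subsearch expands to eval selected_value=cpu, so the calculation runs against the cpu field of every event that has it; multiple rows in the lookup would need map or foreach instead of head 1.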
As @PickleRick said, many of us add those as an indexer into MC. I also add them to several additional custom groups. This helps me avoid those false alerts and get real status and statistics from the indexers by selecting the correct group on the dashboards. There is an idea on ideas.splunk.com to add a dedicated HF role in MC: https://ideas.splunk.com/ideas/EID-I-73 This seems to be a future prospect, so maybe we will finally get it into MC. Currently UFs don't listen on the REST API from the network by default. I haven't tried to enable it and query them, as I haven't seen any benefit in it; you can see them well enough on the forwarder management page. Another reason is that they don't collect some introspection metrics by default, and some cannot be collected without adding separate TAs to them.
Thanks for the response. I was able to get everything sorted. We are trying to reduce our license usage, so if we can trim the data and remove unwanted fields before ingesting it, that would be ideal. Where in the pipeline does data count toward the Splunk license? Can we apply props.conf and transforms.conf to modify and trim the data? If I wanted to remove 5 fields from a log being ingested, would the above approach apply? And if so, if I trim it before ingesting, would that save on our license?

Thanks
The best option is to define your use cases and, based on those, remove unused values before indexing events to disk. But this leads to a situation where, whenever you realize a new use case, you must update your indexing definitions to get the new values into Splunk.

One thing you could look at is checking that events don't contain the same information twice or even more times. This can happen when you have some code in your data and the same information has also been added as clear text. A good example is Windows event logs, where this happens.

There are also some other things you could do (see the sketch after this list):
- remove additional formatting, e.g. JSON objects containing additional spaces
- remove unnecessary line breaks
- check whether you could use metrics indexes for some data instead of putting everything into event indexes
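For the whitespace point in the list, here is a minimal sketch (hypothetical sourcetype; test the regex against your own data first) using SEDCMD in props.conf on the ingesting tier:

# props.conf
[my:json:sourcetype]
# collapse pretty-printed JSON whitespace runs into a single space
SEDCMD-strip_json_ws = s/\s{2,}/ /g

SEDCMD rewrites _raw before the event is indexed, so the saved bytes should also come off your license usage.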
Hello Everyone, I'm trying to create an app in Splunk SOAR version 6.4.0.92 using minimal code, but I keep getting the error 'str' object has no attribute 'get' when I try to install it in the Apps section of the Splunk SOAR dashboard. Can anyone help with this, please? (Screenshots: error message, app.json.)
This depends on what changes you are deploying. Some need a restart and some don't. You can find more information on docs.splunk.com. Here is basic information about clustering: https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Basicclusterarchitecture You should read it and try to understand what it means. Unfortunately that doc doesn't tell everything, as that would take too much space, and to be honest most of us don't need to know all those details.
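If the worry is specifically about restarts, the cluster manager can tell you beforehand whether a pending bundle would trigger one. To my knowledge these commands exist on recent versions, but verify the exact flags against yours:

# on the cluster manager, before applying the bundle
splunk validate cluster-bundle --check-restart
splunk show cluster-bundle-status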
@isoutamo @PickleRick So every time I push a configuration bundle from the CM to the indexers, a rolling restart will happen?
Hi @g_cremin
Are you able to share your code, please? This error occurs when your Python code attempts to use the .get() method on a variable that holds a string value. The .get() method is designed for dictionaries, to retrieve values associated with keys, not for strings.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
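For reference, a tiny reproduction of what that error means (made-up keys, parsed with Python's standard json module): the installer walks app.json expecting nested objects, so a value that is accidentally a bare string fails exactly like this:

import json

good = json.loads('{"configuration": {"base_url": {"data_type": "string"}}}')
bad = json.loads('{"configuration": "base_url"}')

print(good["configuration"].get("base_url"))  # fine: the value is a dict
print(bad["configuration"].get("base_url"))   # AttributeError: 'str' object has no attribute 'get'

So the first thing to check is any key in your app.json whose value should be an object ({...}) but is currently a plain string.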
I wouldn't call it broken. Short of rebalancing buckets between restarting each single indexer (which is completely ridiculous) you can't make sure that every bucket is searchable throughout the rolling restart process. If you have RF=SF>=2, then it's all just a matter of reassigning primaries and maybe restarting only one indexer at a time (which can be a long process if you have a big setup). But what if you have RF=SF=1? (yes, I've seen such setups). So yes, it's a bit frustrating but I wouldn't call it broken.
Probably the most important thing is that rolling restarts affect your searches every time they happen, even though there are options to avoid that. In particular, they affect all alerts, reports etc. which are running when the indexers, or the SHs in a SHC, are restarted by the rolling restart. The implementation is still somewhat broken. There are ideas to fix this on ideas.splunk.com, but no estimate of whether Splunk has a plan and the capacity to fix it.
Understood. Would you happen to have any advice on cleaning a big index?
I accessed the page below, registered with my information, and when I clicked the email button, I received the error shown in the image. https://www.splunk.com/en_us/download/splunk-cloud.html Now I can't even access the Splunk website because this is what I see:     I'm from Brazil, if that helps in any way. So, what should I do? __________________________________________________________________   UPDATE:   Apparently this is a Chrome browser issue, as I was able to log in and out multiple times in Microsoft Edge without any problems! From there, I can start my free trial! So I guess the solution is to change browsers!  
You need to unmount the "/opt/splunk/var/lib/splunk/kvstore/mongo" folder. E.g. in docker-compose:

volumes:
  - "/home/docker_volumes/etc:/opt/splunk/etc"
  - "/home/docker_volumes/var:/opt/splunk/var"
  - "/opt/splunk/var/lib/splunk/kvstore/mongo"
I assume the app stanza should be

[install]
state = disable

or is it "disabled"?
MC doesn't normally directly monitor forwarders. It can do indirect monitoring by checking their logs in _internal index. Sometimes people add HFs to MC with indexer role but AFAIR it causes false alerts since HFs don't actually do indexing.
Hi @livehybrid
Thank you for your answer, but unfortunately it didn't solve my problem. I'm currently in an on-prem environment, and the workaround I found was to set the verify parameter to False, directly in curl.py at line 99:

r = requests.post(uri, data=payload, verify=False, cert=cert, headers=headers, timeout=timeout)

Maybe not the best, but it's working.
How can we get the health status of the HFs, UFs and IHFs that are connected to the DS? Using REST I am able to see the health of the MC, CM, LM, DS, Deployer, IDX etc., but not the red/yellow/green health status of the forwarders. The REST call I am using is:

| rest /services/server/health

On the MC I can see the health status of the MC, CM, LM, DS, Deployer and IDX, but not the forwarders. However, if I run the same query on any HF's UI, I can see its health results there.
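The reason is that | rest only fans out to the search peers of the instance you run it on, so forwarders never answer it. A workaround sketch, assuming the forwarder's management port (8089 by default) is reachable from your host and you have credentials for it, is to call the same endpoint on the forwarder directly:

curl -k -u admin:changeme "https://<hf-host>:8089/services/server/health/splunkd?output_mode=json"

Note that UFs don't listen on the REST API from the network by default (as mentioned in another reply above), so this mainly helps for HFs and IHFs.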