All Posts


Hi scelikok, Thanks, that is a possible solution but not one that is feasible right now, partly because, as you say, it is expensive. I am working on testing the manual download, a conversion to CSV, and then an upload to ES. I will update this post with my results.
Hi, more changes need to be made to fix the wrong installation. In the file $SPLUNK_HOME/etc/system/local/outputs.conf:

[indexAndForward]
index = true
selectiveIndexing = true

[tcpout]
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker)

The original, wrong setting is:

forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
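If it helps, one way to confirm which outputs.conf values actually win after the change (a sketch, assuming a default $SPLUNK_HOME) is btool:

$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug

The --debug flag prints the file each effective setting comes from, so you can check that the corrected whitelist is the one being applied.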
Thank you for confirming.
Assuming your events are as you showed, try using extract

| makeresults
| fields - _time
| eval _raw="[13.01.2025 15:45.50] [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX] [XXXXXXXXXXXXXXXXXXXXXX] [INFO ] [Application_name]- Updating DB record with displayId=ABC0000000; type=TRANSFER
[13.01.2025 15:45.50] [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX] [XXXXXXXXXXXXXXXXXXXXXX] [INFO ] [Application_name]- Updating DB record with displayId=ABC0000000; type=MESSAGES
[13.01.2025 15:45.50] [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX] [XXXXXXXXXXXXXXXXXXXXXX] [INFO ] [Application_name]- Updating DB record with displayId=ABC0000000; type=POSTING
[13.01.2025 15:45.50] [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX] [XXXXXXXXXXXXXXXXXXXXXX] [INFO ] [Application_name]- Sending message to topic ver. 2.3 with displayId=ABC0000000"
| multikv noheader=t
| fields _raw
``` The lines above emulate the data you have shared and are unnecessary for your real data ```
| extract
| fillnull value="SENDING" type
| stats values(type) as types by displayId
| where mvcount(types) != 4 or types != "SENDING"
Does  index=WhatEverIndexTheseLogsAreIn type OR displayId   produce any of the logs you want?
Assuming type and displayId are already extracted, no, I am not able to join all 3 conditions together for one id. So I need the full query to get the ids which are updating in all 3 DBs but not being sent to the Kafka topic.
Assuming type and displayId are already extracted, you could try something like this | fillnull value="SENDING" type | stats values(type) as types by displayId | where mvcount(types) != 4 or types !... See more...
Assuming type and displayId are already extracted, you could try something like this

| fillnull value="SENDING" type
| stats values(type) as types by displayId
| where mvcount(types) != 4 or types != "SENDING"
Have you set up any eventtypes or tagging?
If you're trying to do this in a dashboard, try setting a token to the variable part of the search and using that.
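As a rough illustration of that idea (the index, sourcetype, and token names here are hypothetical), the dashboard search could reference the token directly:

index=my_index sourcetype=my_sourcetype "Updating DB record" displayId=$displayId_tok$
| stats values(type) as types by displayId

A text box or dropdown input in the dashboard then sets $displayId_tok$, so the variable part of the search changes without editing the SPL itself.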
In my logs I am getting 4 events for 1 id:
1) Updating DB record with displayId=ABC0000000; type=TRANSFER
2) Updating DB record with displayId=ABC0000000; type=MESSAGES
3) Updating DB record with displayId=ABC0000000; type=POSTING
4) Sending message to topic ver. 2.3 with displayId=ABC0000000

Sample logs:
[13.01.2025 15:45.50] [XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX] [XXXXXXXXXXXXXXXXXXXXXX] [INFO ] [Application_name]- Updating DB record with displayId=ABC0000000; type=TRANSFER

I want to get the list of all those ids which have all 3 events like "Updating DB........." but are missing "Sending message to topic ........."
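For reference, here is a rough sketch of one way to extract displayId and type from raw events like these and keep only the ids that have all three DB updates but no "Sending message" event (the index, sourcetype, and regexes are assumptions, not taken from the thread):

index=your_index sourcetype=your_sourcetype ("Updating DB record" OR "Sending message to")
| rex "displayId=(?<displayId>[A-Z0-9]+)"
| rex "type=(?<type>[A-Z]+)"
``` send events carry no type= pair, so label them explicitly ```
| eval type=if(isnull(type), "SENDING", type)
| stats values(type) as types by displayId
``` keep ids with the three DB updates that never reached the Kafka topic ```
| where isnull(mvfind(types, "SENDING")) AND mvcount(types)=3

The answers above achieve much the same with extract and fillnull; this version only makes the field extraction explicit.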
We deleted Python 2.7.
No matter which version you upgrade to, you can always reference the manifest file. If the manifest does not list the file in the warning, then it can be deleted.
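As a sketch of what that check can look like on a *nix host (the manifest file name varies with version and build, so treat the path as an example):

grep "lib/python2.7" $SPLUNK_HOME/splunk-*-manifest

If the path from the warning shows up in the manifest, the file is still part of the shipped installation; if it does not, it is a leftover and can be removed.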
Ended up going with this that works pretty good. [My Search] | timechart span=$span$d dc(userSesnId) as count_of_user_sessions | trendline sma$sma$(count_of_user_sessions) as "Moving Average" |... See more...
Ended up going with this, which works pretty well.

[My Search]
| timechart span=$span$d dc(userSesnId) as count_of_user_sessions
| trendline sma$sma$(count_of_user_sessions) as "Moving Average"
| rename count_of_user_sessions AS "Distinct Count of User Sessions"
Hi All, I have a main search where the name1 field will have multiple values. I need to run a sub search based on the value of name1. The structure goes like this:

main_search which has name1=a
sub search:
if name1=a then run search1
if name1=b then run search2

I have tried this with the following code:

| makeresults
| eval name1="a"
| eval condition=case(name1="a", "index=_internal | head 1 | eval val=\"Query for a1\" | table val", name1="b", "index=_internal | head 1 | eval val=\"Query for b\" | table val", 1=1, "search index=_internal | head 1 | eval val=\"Default query\" | table val")
| table condition
| map search=$condition$

I am getting the following error:
Unable to run query '"index=_internal | head 1 | eval val=\"Query for a1\" | table val"'.
It's exactly like @PickleRick said. There is no way to ensure that you can do all of that in one phase without data leaks, especially when you don't know from where and what kind of data you will receive from source systems.

The only way to get this working, even to some degree, is a strict data onboarding and change management process with a separate dev/test environment where all data and changes are integrated first. Then you need some way to analyze that data and ensure that there haven't been any data leaks containing e.g. SSN, IBAN, or other secure PII data. And as was said, the format options for those are practically countless (believe it or not, but you will see it). Only after you are absolutely sure that your masking etc. is working can you do the production integration.

But you will see that sooner or later someone will make some "emergency" change or something, and then you have those events in your production. Once this has happened, you have unmasked data in your Splunk indexes and there is no way to mask it at search time so that no one can see it. Even using the delete command is not enough, as those events are still in the buckets, and if you have access to your storage layer you can get that data out of there. Then the only way is to delete the index, ensure it is overwritten enough times, and then reingest the data.

r. Ismo
Apart from what @gcusello and @PickleRick have said (which I agree with), this "problem" should be fixed at source. Your organisation should fully justify why sensitive data such as credit card numbe... See more...
Apart from what @gcusello and @PickleRick have said (which I agree with), this "problem" should be fixed at source. Your organisation should fully justify why sensitive data such as credit card numbers and social security numbers is being saved unobfuscated in logs for Splunk to ingest in the first place. Ideally, all such data should be stored in a secure place which is only retrievable by a few trusted people who access it for legitimate reasons. Anything else is possibly a violation of your customers' privacy protection rights.
OK. Several things here. 1. For a question starting with "what is the best way", especially if no boundary conditions are given, the answer is usually "it depends". 2. From my experience - the wors... See more...
OK. Several things here.

1. For a question starting with "what is the best way", especially if no boundary conditions are given, the answer is usually "it depends".

2. From my experience - the worse the problem definition, the less reliable the outcome. I've dealt with customers who wanted something just "configured so it works" (we're not necessarily talking about Splunk, just a general idea) and the result was usually less than stellar.

Your problem is rooted in compliance but it's equally common in DLP areas - just find something. We don't know what/where/if it is but we want you to find it. While some types of identifiers can be distinguished because they are in a particular format _and_ they have some internal integrity which you can check (like IBAN numbers have check digits), others do not, and there is either a fat chance of false positives or of false negatives, depending on how creative you are with finding - for example - all the possible ways of writing a phone number. And don't even get me started on trying to find names or addresses. Of course, you can try to "use AI" to guess what and where constitutes sensitive data, but this will only add another layer to an already excruciating headache. Even a human, having a relatively good understanding of the context, could make mistakes here now and then.

So even without getting into the gory technical details of how to implement such stuff with/around Splunk, I'd say that if you want to do something like that without proper data classification and well-defined things to filter/mask, you're in for a treat - a neverending project of tweaking your detection engine and dealing with stakeholders' complaints about false positives and negatives.
Hi @Richy_s ,
to mask sensitive data, you can follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/Anonymizedata
The main issue, if I correctly understood, is identifying PII and sensitive information in your data.
The best approach, in my experience, is to ingest the data into a temporary index (so you can delete it when you finish the analysis) and identify all the data and the regexes to filter it, then apply these filters using the approach in the link above.
I don't understand what you mean when you say "the props need to be dynamic": filter rules must be defined and used; if you have new rules, you have to add them.
Ciao.
Giuseppe
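To make the approach from that link concrete, a minimal props.conf sketch in the spirit of the documented SEDCMD examples could look like this (the sourcetype and the exact regexes are illustrative assumptions; real card and SSN patterns need tuning against your data):

props.conf on the indexer or heavy forwarder that parses the data:

[your:sourcetype]
# mask all but the last 4 digits of ssn=... and cc=... pairs at index time
SEDCMD-mask_ssn = s/ssn=\d{5}(\d{4})/ssn=xxxxx\1/g
SEDCMD-mask_cc = s/cc=(\d{4}-){3}(\d{4})/cc=xxxx-xxxx-xxxx-\2/g

Rules like these only catch patterns you have already defined, which is exactly why the temporary-index analysis described above matters: the props cannot discover new sensitive formats on their own.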
Hi, I have a requirement to mask any sensitive data, such as credit card numbers or Social Security Numbers, that might be ingested into Splunk. I can write the props to handle data masking, but the challenge is that I do not know where or if the sensitive data will appear. Although the data we currently have doesn't contain any sensitive information, compliance mandates require us to implement controls that detect and mask such data before it is ingested into Splunk. Essentially, the props need to be dynamic. Is there a way to achieve this?   Thanks.
If I create a role test_role which inherits from the can_delete role, configtracker will show that I created a test_role inheriting from can_delete. But if I then create a test2_role which inherits from test_role, the configtracker entry will only show that it inherits from test_role; it won't show the underlying inheritance from can_delete.