All Posts

@gcusello, it looks like a paid course. By any chance, is there a link to a free course?
@madhav_dholakia - As far as I'm aware, Splunk keeps _raw exactly as it comes from the system; unless you explicitly write configuration (e.g., in props.conf) to change it, Splunk has no functionality to make changes. To me it looks like the actual value is 17.0, but the preview is simplifying it to 17 on both systems.   I hope this helps!!!
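For completeness, the kind of props.conf change mentioned above looks roughly like this (a sketch only; the sourcetype name and the pattern are hypothetical, not taken from this thread):

# props.conf -- hypothetical sourcetype; illustrative pattern only
[my_custom_sourcetype]
# Rewrite a bare "17" into "17.0" in _raw before indexing
SEDCMD-normalize_value = s/value=17\b/value=17.0/g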
Hi @gcusello, thanks for your support.   One final question: how can I turn this query into an alert? Please advise.
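For reference, a minimal sketch of what a scheduled alert looks like in savedsearches.conf (stanza name, search, schedule, and recipient are all placeholders; in the UI this corresponds to Save As > Alert):

# savedsearches.conf -- placeholder values; adjust search, schedule, and action
[My Duplicate Events Alert]
search = index=main sourcetype=my_sourcetype | stats count by host | where count > 100
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = oncall@example.com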
Hi @AL3Z, this is the training: https://www.splunk.com/en_us/training/course-catalog.html?filters=filterGroup4SplunkEnterpriseSecurity Ciao. Giuseppe
Hi, you can group results by col1:

(search)
| stats values(VM) values(col2) by col1

------------
If this was helpful, some karma would be appreciated.
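As a quick illustration with made-up data (field names taken from the question, values invented), the grouping behaves like this:

| makeresults count=3
| streamstats count as n
| eval col1=if(n<3, "groupA", "groupB"), VM="vm".n, col2="val".n
| stats values(VM) as VM values(col2) as col2 by col1

This returns one row per col1 value, with VM and col2 collapsed into multivalue fields per group.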
Hi again,   You can back up one collection at a time. And yes, you have to unschedule all the reports that fill your KV store collections. That can be a bit long and painful; I only had 2 collections, so it was not a stretch in my case. I had to migrate the KV store from one SH to another, and I used a backup file. Never tried to sync 2 instances, sorry.   I would suggest you try to back up the KV store in the state you are in now and then try to restore it on a test SH. If this works => clean both your KV store SHs and restore there. If not, I'm afraid I would do it all over again: clean it all and then painfully fill your collections with data again. If you still have the underlying data, you can recompute the collections, even for past periods; it takes a while, but it works. There will definitely be a small service interruption for these collections, but in the end you'll win.   And for later, once you have succeeded in getting your KV store back up and running, I would suggest you add a scheduled backup task (daily, for example).   On another matter: why is your KV store failing to start? There should be some insight in `/opt/splunk/var/log/splunk/mongod.log`. Maybe it just needs a renewal of the mongod certificate (probably too easy, but who knows...)   Regards, Ema
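For reference, a sketch of the per-collection backup/restore CLI, assuming a reasonably recent Splunk version (archive, collection, and app names below are placeholders):

# Verify the KV store is ready before backing up
splunk show kvstore-status

# Back up a single collection (names are hypothetical)
splunk backup kvstore -archiveName my_backup -collectionName my_collection -appName search

# Restore from the archive on the target search head
splunk restore kvstore -archiveName my_backup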
@Muthu_Vinith - Your question is answered here: https://community.splunk.com/t5/Getting-Data-In/Ingest-CSV-file-as-metrics/m-p/671904#M112587   I hope this helps!!!
This was the concept:

index=sith broker sithlord!=darth_maul OR index=jedi domain="jedi.lightside.com" (master!="yoda" AND master!="mace" AND master="Jinn")
| where Jname=Sname
| table Jname, Sname, strengths, mentor, skill, domain, mission, strength, teacher, actions

and:

index=sith broker sithlord!=darth_maul OR index=jedi domain="jedi.lightside.com" (master!="yoda" AND master!="mace" AND master="Jinn")
| where Jname!=Sname
| table Jname, Sname, strengths, mentor, skill, domain, mission, strength, teacher, actions

I am trying to get the results where Jname and Sname are the same, plus all the following columns. This is a comparison for our analysts: they want the first two columns to match, plus the following columns. Then a second report where they are not matching.
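A possible simplification (a sketch, not the poster's method): run the base search once and flag matches with eval, so both reports come from a single search:

index=sith broker sithlord!=darth_maul OR index=jedi domain="jedi.lightside.com" (master!="yoda" AND master!="mace" AND master="Jinn")
| eval name_match=if(Jname=Sname, "match", "no match")
| table name_match, Jname, Sname, strengths, mentor, skill, domain, mission, strength, teacher, actions

Analysts can then filter on name_match instead of maintaining two nearly identical searches.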
Thank you for the response. The more I read about the backups, the more worried I get.

Did you back up the entire KV store or just a specific collection? I would like to clean the entire store on two servers.

In the Splunk Docs, for the KV store backup, it says to ensure that the backupRestoreStatus field and the status field are both in the ready state. Our backupRestoreStatus is ready, but the status on each of the failing servers is starting. Both of the servers look like that. I think you resynced the entire cluster. Have you had to resync individual servers? And I do not want to move to any other SHC; these two are staying where they are.

The Docs also mention: "If you are running any searches that use outputlookup with the default append=f parameter, end them or allow them to complete before taking a backup, or the backup fails."

Is there a search that you ran to get all searches that are using outputlookup? I don't believe we have many, but we may.

I'm trying to back up everything correctly since I am not entirely sure to what extent the KV store is affecting all of the searches/reports.

Thank you for your response, wasn't expecting one. I appreciate the help.
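One way to list saved searches that use outputlookup (a sketch using the REST endpoint; widen or narrow the scope for your environment):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*outputlookup*"
| table title, eai:acl.app, search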
In Splunk IT Service Intelligence (ITSI), you can import entities via CSV to Entity Management using the following steps:

1. Prepare Your CSV File: Make sure your CSV file contains the necessary information for each entity, such as title, identifier, and any other relevant fields. The CSV file should have a header row with column names.
2. Access Entity Management: Log in to your Splunk instance and navigate to ITSI. In the ITSI main menu, go to Configure > Entity Management.
3. Open the Import Wizard: Click on Import Entities.
4. Select the CSV File: Click Browse (or similar, depending on your system) to locate and select your CSV file.
5. Map Fields: ITSI will attempt to automatically map fields from your CSV file to ITSI entity fields. Review the mapping to ensure accuracy. If needed, manually map fields by dragging the appropriate CSV columns to the corresponding ITSI fields.
6. Preview and Validate: Preview the entities to ensure they are correctly mapped and the data looks accurate. Validate the data to check for any errors or missing information.
7. Complete the Import: If everything looks correct, proceed with the import by clicking Finish or a similar button. Splunk ITSI will process the CSV file and import the entities into Entity Management.
8. Verify the Import: After the import is complete, verify that the entities are visible in the Entity Management interface.

It's important to note that the exact steps and options might vary slightly depending on the version of Splunk ITSI you are using. Always refer to the official Splunk documentation for your specific version for the most accurate and up-to-date information. For more information: https://docs.splunk.com/Documentation/ITSI/latest/Entity/ImportCSV#Steps
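A minimal example of what such a CSV might look like (column names here are illustrative, not a required schema; check the linked docs for the fields your ITSI version expects):

title,host,ip,role
web-server-01,web01.example.com,10.0.0.11,webserver
db-server-01,db01.example.com,10.0.0.21,database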
@gcusello I didn't get what the original CS means. Is that with index=notable or the previous search? Can you please guide me or share a link to help me master ES?
So using this method below I believe will do it.

``` This join is to pull in an array of all Regions you want to search for in the 'Region' multivalue field ```
``` There are other ways to make the list (hardcoded, macros, lookups); I'm just using a lookup as a POC since, if the list is large, it is easy to maintain ```
| join type=left
    [ | inputlookup list_of_regions
      | stats values(list_of_regions) as list_of_regions
      | eval list_of_regions_array=mv_to_json_array(list_of_regions)
      | fields - list_of_regions ]
``` Convert the array to a multivalue field of all regions to search for ```
| eval list_of_regions=json_array_to_mv(list_of_regions_array)
| fields - list_of_regions_array
``` Use the regions multivalue field to build a regex ```
| eval list_of_regions_regex="(?i)(".mvjoin(list_of_regions, "|").")"
``` Pipe the regex built from the regions into this eval to loop through the multivalue field ```
| eval Test_loc_method2=case(
    isnull(Region), null(),
    mvcount(Region)==1, if(match(Region, $list_of_regions_regex$), replace(Region, ".*".$list_of_regions_regex$.".*", "\1"), null()),
    mvcount(Region)>1, mvmap(Region, if(match(Region, ".*".$list_of_regions_regex$.".*"), replace(Region, ".*".$list_of_regions_regex$.".*", "\1"), null())))
| fields - list_of_regions, list_of_regions_regex
``` Pipe the matches that returned into the 'list_of_regions' lookup to pull back a formatted version of the match. Note: this lookup definition must have case sensitivity turned off for this part to work as intended ```
| lookup list_of_regions list_of_regions as Test_loc_method2 OUTPUT list_of_regions as formatted_matched_region

Full SPL I used to generate this output:

| makeresults
| fields - _time
| eval Region=split("sh Bangalore Test|Chennai|Hyderbad", "|")
| append
    [ | makeresults
      | fields - _time
      | eval Region=split("test China 1|India| ", "|") ]
| append
    [ | makeresults
      | fields - _time
      | eval Region=split(" |Loc USA 2|London", "|") ]
| append
    [ | makeresults
      | fields - _time
      | eval Region=split("lowercased china to test|New York|usa (America)", "|") ]
``` This join is to pull in an array of all Regions you want to search for in the 'Region' multivalue field ```
| join type=left
    [ | inputlookup list_of_regions
      | stats values(list_of_regions) as list_of_regions
      | eval list_of_regions_array=mv_to_json_array(list_of_regions)
      | fields - list_of_regions ]
``` Convert the array to a multivalue field of all regions to search for ```
| eval list_of_regions=json_array_to_mv(list_of_regions_array)
| fields - list_of_regions_array
``` Use the regions multivalue field to build a regex ```
| eval list_of_regions_regex="(?i)(".mvjoin(list_of_regions, "|").")"
``` Pipe the regex built from the regions into this eval to loop through the multivalue field ```
| eval Test_loc_method2=case(
    isnull(Region), null(),
    mvcount(Region)==1, if(match(Region, $list_of_regions_regex$), replace(Region, ".*".$list_of_regions_regex$.".*", "\1"), null()),
    mvcount(Region)>1, mvmap(Region, if(match(Region, ".*".$list_of_regions_regex$.".*"), replace(Region, ".*".$list_of_regions_regex$.".*", "\1"), null())))
| fields - list_of_regions, list_of_regions_regex
``` Pipe the matches that returned into the 'list_of_regions' lookup to pull back a formatted version of the match. Note: this lookup definition must have case sensitivity turned off for this part to work as intended ```
| lookup list_of_regions list_of_regions as Test_loc_method2 OUTPUT list_of_regions as formatted_matched_region

Note: I created a lookup for this example with a CSV named "list_of_regions.csv" and a lookup definition named "list_of_regions". On the definition I turned off case sensitivity to allow a formatted region to be returned on the last step if desired. You don't necessarily have to use a lookup for this method to work; I just found that if the list gets large, storing it in a lookup sometimes makes things easier to maintain. If you only need the list of regions for a single search, you could probably just hardcode it into the search itself (or build the hardcoded regex from your list). I was just sharing how you can sometimes pipe a $token$ into an eval function, and it seemed to fit your use case here. And for reference of what the lookup looks like, here is a screenshot of what I used for this.
Hi, I recently had to clean --local and resync, without much success, due to some difficulties getting the KV store members to acknowledge they were in the same cluster... Some configuration is very persistent when you move a SH from one SHC to another SHC. My mistake on that.

In the end we:
- backed up the correct KV store
- cleaned the KV store on all instances
- restored the KV store and resynced
=> good as new!

Ema
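In CLI terms, that sequence looks roughly like this (a sketch; the archive name is a placeholder, and the clean/restart steps run on each affected member):

# On a healthy member: back up the KV store first
splunk backup kvstore -archiveName kvstore_good

# On each faulty member: stop, clean the local KV store data, restart
splunk stop
splunk clean kvstore --local
splunk start

# Restore the backup, then resync members against the cluster
splunk restore kvstore -archiveName kvstore_good
splunk resync kvstore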
Hi @aaronbarry73, let me know if I can help you more; otherwise, please accept one answer for the other people of the Community. And when you have the answer from Splunk Support, please share it for the other people of the Community as well. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Thanks, @gcusello, it looks like this might be the intended behavior, but it seems odd that the captain would run all DBX jobs (unless the jobs are being delegated but the captain is doing all of the logging?), so I think I will open a case to get confirmation on the expected behavior. We have always used dedicated Heavy Forwarders, but I figured it would be nice to maintain all identities, connections, and inputs in one place!  We'll see what Splunk says, and I'll keep digging!
I found the jar trick stable from version 4.5 onwards, so I'm posting what I'm using:

# Locate the AppDynamics install directory (first path that exists wins)
APPD_PATH=$(ls -d /opt/appdynamics 2>/dev/null || ls -d /appl/appdynamics/saas 2>/dev/null || ls -d /appl/appdynamics 2>/dev/null)
if [ -z "$APPD_PATH" ]; then
    echo "AppDynamics is not installed on this server"
else
    echo "DEBUG: ls -d $APPD_PATH/*"
    # Iterate over each agent directory under the install path
    ls -d "$APPD_PATH"/* | grep agent | while read adir
    do
        # Derive the agent name, e.g. "machine-agent" -> "machine"
        agent=$(basename $(basename "$adir" "agent") "-")
        echo "DEBUG: agent=$agent"
        # Read the agent version straight from the jar's manifest
        unzip -p "$adir"/${agent}*agent.jar META-INF/MANIFEST.MF \
            | sed -e 's/Implementation-Version: //;t;d'
        # Report the version of every bundled java binary
        find "$adir"/ -type f -name java -executable | while read javabin
        do
            echo -e "# $javabin"
            "$javabin" -version 2>&1
        done
    done
fi
We are in the process of implementing SAML configuration in Splunk, using an external .pem certificate. However, Splunk does not accept this certificate. How can we get Splunk to accept an external certificate so we can successfully configure SAML? Additionally, for the SAML integration we are using NetIQ Access Manager.
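For anyone landing here, a rough sketch of where the IdP certificate is usually referenced in authentication.conf (the values below are placeholders for a generic IdP, not a verified NetIQ-specific configuration):

# authentication.conf -- placeholder values; a sketch, not a verified setup
[authentication]
authSettings = saml
authType = SAML

[saml]
# Path to the IdP's signing certificate in PEM format
idpCertPath = /opt/splunk/etc/auth/idpCerts/idpCert.pem
idpSSOUrl = https://idp.example.com/nidp/saml2/sso
entityId = splunk-sh.example.com
signAuthnRequest = true
signedAssertion = true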
Hi @AL3Z, if your search returns results, you can be sure that you have duplicated events. You can analyze your data to understand where these duplicates come from and whether there is a real possibility of duplication. Ciao. Giuseppe
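A common starting point for that analysis (a sketch; swap in your own index) is to count identical raw events and see where they originate:

index=your_index
| stats count values(source) as source values(host) as host by _time, _raw
| where count > 1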
Hi @simon_b, let me understand: you created some index-time fields from JSON and you want to use aliases. But the fields from JSON aren't created at index time by default, so, if you are creating index-time fields, you can create them using the names you like. Anyway, are you sure that a search on index-time fields (with tstats) doesn't run with aliases? I don't have index-time JSON extractions to test, but they should run with aliases. Are you sure that you extracted the JSON fields at index time? Ciao. Giuseppe
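For reference, a quick way to check whether a field really exists at index time (index and field names below are placeholders): tstats only operates on indexed fields, so a count by the field returns results only if it is indexed:

| tstats count where index=your_index by your_indexed_field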
Over 3 years later, and I am wondering the same thing. I have two individual SHs that have a faulty KV store, and I want to see the impact of running the splunk clean kvstore --local command.

Hope we get an answer within the next 3 years.