All Posts


In short, you can't do it. Even if you manage to remove those "additional" buckets, as soon as Splunk recognizes that the cluster has lost its SF or RF it will start rebuilding the missing buckets. This will happen again and again until the retention time of those buckets has been reached. And even if you found some weird way to do it, you would lose all support from Splunk when (I don't say if) you have any issues with your environment. It's much better to configure your storage to avoid that replication, or to use some other storage instead.
Again - it depends on what "unused" means here. Just listing defined indexes which haven't received any data should indeed be pretty straightforward: check your defined indexes (it might be difficult though if you're on a distributed setup and don't have the capability to spawn rest calls to the indexers!) and compare that list with a summary of your data across all indexes (be aware of the difference between _time and _indextime). Be aware, though, that if your retention periods are shorter than the time range you're searching over, you might not get valid data. But that's it. Depending on what you mean by "unused", the rest of the task can be difficult or even impossible. How is Splunk supposed to know which sourcetypes you had defined yesterday but haven't searched for? And if you have two or more SH(C)s connecting to the same indexer(s)... that can get ugly quickly.
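To illustrate the "compare defined indexes with actual data" idea, here is a minimal SPL sketch. It assumes your role can run | rest against the indexers (splunk_server=*) and that "unused" simply means "no events currently searchable in the index"; anything already aged out by retention will look unused too.

| rest /services/data/indexes splunk_server=*
| dedup title
| table title
| rename title AS index
| join type=left index
    [| tstats count AS events where index=* OR index=_* by index]
| fillnull value=0 events
| where events=0
| table index

The tstats subsearch only sees data that is still within retention, so treat the result as an estimate, not as proof that an index was never written to.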
I have seen many struggle with btool and its somewhat messy output, so I made an updated version that makes it far easier to use. It is written as a shell function, so you can add it to any start-up script on your Linux box. It uses colour and sorts all settings into groups to make it easy to find what you are looking for: green is the stanza name, yellow is each setting in the stanza, and grey is the file that holds the setting.

btools () {
    # Handle input options
    if [[ "$1" == "-sd" || "$1" == "-ds" ]]; then
        opt="etc/system/default"; file="$2"; stanza="";   search="$3"
    elif [[ "$1" == "-d" && "$2" == "-s" || "$1" == "-s" && "$2" == "-d" ]]; then
        opt="etc/system/default"; file="$3"; stanza="";   search="$4"
    elif [[ "$1" == "-d" ]]; then
        opt="etc/system/default"; file="$2"; stanza="$3"; search="$4"
    elif [[ "$1" == "-s" ]]; then
        opt="none";               file="$2"; stanza="";   search="$3"
    else
        opt="none";               file="$1"; stanza="$2"; search="$3"
    fi

    # If no arguments are given, show the usage text
    [[ -z "$file" ]] && echo -e "
    btools for Splunk v3.0 Jotne

    Missing arguments!
    usage: btools [OPTION] file [STANZA] [SEARCH]
      -d        Do not show splunk default
      -s        All stanza (only needed if search is added and no stanza)
      file      splunk config file without the .conf
      [stanza]  complete stanza name or just part of it
      [search]  search phrase or part of it
    Example:
      btools server general servername
      btools web
    " && return 1

    # If options are not set, give default values
    [[ -z "$stanza" ]] && stanza=".*" || stanza=".*$stanza.*"
    [[ -z "$search" ]] && search=""

    ~/bin/splunk btool $file list --debug | awk \
        -v reset="\033[m\t" \
        -v yellow="\033[38;5;226m\t" \
        -v green="\033[38;5;46m" '              # set the different ansi colors used
    {sub(/\s+/,"#");split($0,p,"#")}            # split the input: p[1]=filename, p[2]=rest of line
    p[2]~/^\[.*?\] *$/ {f=0}                    # if this is a stanza name, clear flag f
    f && tolower(p[2])~tolower(search) {        # if this is not a stanza, test if the text matches the search (or no search)
        split(p[2],s," ")                       # store each setting in its stanza group
        a[st s[1]]++
        if(p[1]!~opt)print green st yellow p[2] reset p[1]   # print each line, skipping defaults if -d was given
    }
    p[2]~"^\\["stanza"\\]$" {f=1;st=p[2]}       # find the next stanza
    ' stanza="$stanza" search="$search" opt="$opt"
}

Examples:

btools server general servername
btools web

Say you want to see all your custom settings in props.conf for the stanza covering IP 10.36.30.90, without showing any default settings (-d):

btools -d props 10.36.30.90

Show the custom settings for the index shb_ad:

btools -d indexes shb_ad

Show the homePath for the shb_ad index:

btools -d indexes shb_ad homepath

Show all settings for the shb_ad index, including the defaults (note: the output has more lines than the screenshot shows):

btools indexes shb_ad

Any suggestion to make it better is welcome.
Another proposal: don't use the CLI to add inputs. That installs them under SPLUNK_HOME/etc/system/local, and if/when you take a deployment server (DS) into use you must manually move/update those settings node by node. You should always put those definitions into separate apps. That way it's really easy to update them later on, and to add the same configuration to other nodes, as every log source has its own app.
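As an illustration of the "one app per log source" idea, a minimal sketch of such an app might look like the following. The app name org_linux_messages_inputs, the index os_linux and the monitored path are hypothetical examples; use whatever matches your environment.

# $SPLUNK_HOME/etc/apps/org_linux_messages_inputs/local/inputs.conf   (hypothetical app)
[monitor:///var/log/messages]
index = os_linux
sourcetype = syslog
disabled = 0

Because the input lives in its own app, the same app can be pushed unchanged from the deployment server to every node that should collect that log source.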
Hi, peer-apps is the location where the MN will deploy those apps on the peers; it's not used on the MN itself. On the MN you should use manager-apps instead. In your lab, _cluster is OK for testing, but for any real environment you should use separate apps. Are you sure that your REGEX is correct? Can you give us a sample from both log files? Use </> to paste it as a code block, so we can be sure the examples are exactly what you have. Could you also add your inputs.conf so we can see what you have defined there?
Hi @robj, thanks for the suggestion! That sounds like a solid option. Do you also have your heavy forwarder deployed in AWS?

We ended up using the Splunk Data Manager app to ingest AWS CloudTrail logs from an AWS S3 bucket, using a cross-account IAM role that can be assumed by Splunk Cloud.

Splunk Data Manager documentation: https://docs.splunk.com/Documentation/DM/1.12.0/User/About
Configure AWS for onboarding from a single account: https://docs.splunk.com/Documentation/DM/1.12.0/User/AWSSingleAccount

You can use the above implementation to ingest CloudTrail logs either from a single AWS account or from your centralized logging account in an AWS Organizations or Control Tower environment.
This is quite an often-asked question: people want to know whether there are unused indexes etc., and you can find those discussions by searching with Google. The short answer is that you can't get a list like this that is 100% accurate. There are so many ways to access the data, and there is no requirement that users put index or sourcetype names in their queries. Of course you can get some estimates, and you can get a list of the indexes and sourcetypes which are used, but there is no way to get a list of the unused ones!
Hi @Meett, just a quick update on this old thread: we ended up using the Splunk Data Manager app to ingest AWS CloudTrail logs from an AWS S3 bucket, using a cross-account IAM role that can be assumed by Splunk Cloud.

Splunk Data Manager documentation: https://docs.splunk.com/Documentation/DM/1.12.0/User/About
Configure AWS for onboarding from a single account: https://docs.splunk.com/Documentation/DM/1.12.0/User/AWSSingleAccount

You can use the above implementation to ingest CloudTrail logs either from a single AWS account or from your centralized logging account in an AWS Organizations or Control Tower environment.
Hi @securepoint

Unfortunately I can't get any of the Cortex docs to load for me at the moment; however, at a previous customer we used Splunk SC4S to receive a syslog feed from Cortex and then sent this to Splunk over HEC. This was the raw data rather than just alerts.

Are you able to configure any outputs, such as syslog, from your Cortex XDR configuration?

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
I'm using Splunk Cloud version 9.3.2408.107. I checked: it is an IoT input, and I made sure there is no space in "test". The add-on version is 1.0.1. Thanks for the help.
Hi @Odnaits

Please can you confirm which Splunk version you are on, and whether it's an XDR or IoT input you are creating? I tried this locally and it worked for me (for both); is there definitely no space before or after the word "test"?

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
I am listing the index names using a rest query and then checking those names against _audit or _internal to find out how many indexes are used, which sourcetypes are used, and how many indexes are not used in Splunk. I also need to identify which indexes and sourcetypes have not received any data for a period exceeding 90 days.
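For the 90-day part specifically, a minimal sketch using tstats might look like this. It assumes "received data" means events that are still searchable; index/sourcetype combinations whose data has already aged out of retention will not appear at all, so combine it with the list of defined indexes from your rest query.

| tstats latest(_time) AS last_event where index=* OR index=_* by index, sourcetype
| eval days_since_last_event = round((now() - last_event) / 86400, 1)
| where days_since_last_event > 90
| sort - days_since_last_event

If you want the check based on arrival time rather than event time, replace latest(_time) with max(_indextime), keeping in mind the _time vs _indextime difference mentioned above.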
I'm trying to extract endpoint data from Cortex XDR, but I don't want to see just alerts in Splunk: I need all the endpoint data collected by XDR to be replicated in Splunk. Neither Palo Alto nor Splunk support has been able to assist with this. I can't be the first person to ask about it, since this is a fundamental requirement, unless it's simply not possible and everyone else already knows that except me. One way should be to call the XDR APIs and forward the results through HEC into Splunk; I would need to write a script for it. Has anyone tried this approach, or any other?
1. Main question: how do you define "not used"?

2. While indexes are discrete "bags" for events, a sourcetype is just a label. Yes, it carries significant meaning for Splunk functionality, but you could even give each event its own sourcetype. So why would you want to know what your "unused" sourcetypes are?
Greetings,

Every time I try to create a new input on version 1.0.1 of the Splunk Add-on for Palo Alto, this error appears. Is there a way to solve this? I tried to see if the error was on the other inputs, but it seems that it is on the name.

Thanks in advance.
@mohsplunking

Since BeyondTrust Remote Support SaaS is a cloud offering, the integration likely relies on its API capabilities or syslog forwarding features that can be directed to Splunk Cloud.

HEC

Splunk Cloud supports HEC, which allows you to send data over HTTPS using token-based authentication. If BeyondTrust Remote Support SaaS can send event data (e.g., session logs) to a custom endpoint, HEC could ingest this data directly.

Custom TA for the REST API

Check BeyondTrust's documentation or contact their support to confirm the availability of a REST API for the SaaS version.
Build a custom TA: install the "REST API Modular Input" app from Splunkbase (if supported in your Splunk Cloud environment; you may need to request Splunk Support to install it).
Configure a REST input with the BeyondTrust API URL, authentication (OAuth or API key), and a polling interval (e.g., every 60 seconds).
Write props.conf and transforms.conf in the TA to parse the API response (likely JSON) into meaningful fields for Splunk.

Syslog Forwarding with an Intermediary

In the BeyondTrust admin interface, set up syslog forwarding to a server you control (e.g., an IP address and a port such as 514 for UDP or TCP).
Deploy a Splunk Universal Forwarder on a small VM or container, and configure it to listen for the syslog data and forward it to Splunk Cloud using outputs.conf.
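To make the HEC option more concrete, here is a minimal sketch of what pushing a single event to HEC looks like. The hostname, token, sourcetype and index are all placeholder values; Splunk Cloud exposes HEC on its own https://http-inputs-<stack>.splunkcloud.com style endpoint, so substitute the URL and token from your HEC configuration.

# Hypothetical endpoint and token - replace with your own HEC settings
curl -k "https://http-inputs-example.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk 12345678-1234-1234-1234-123456789abc" \
  -d '{"event": {"message": "test session log"}, "sourcetype": "beyondtrust:rs:session", "index": "beyondtrust"}'

Whatever script or forwarder ends up shipping the BeyondTrust session data would send it the same way, one JSON payload per event (or batched, newline-separated).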
Interesting approach, and thanks for providing it! I came up with multiple solutions for the different scenarios we have. We have some 9.4.0 environments we are testing right now, and some 9.3.1 which will be upgraded to 9.3.2 soon.

If there was a single value panel in a row by itself, I just updated the CSS to change the width of the entire row instead of just the width of the panel, by giving the row an id instead of giving the single value panel an id.

#panelid { width: 30% !important; }       /* works in 9.3.1 but not in 9.4.1 */
#rowWithPanel { width: 30% !important; }  /* works in 9.4.1 and 9.3.1 */

If there were two single value panels in a row whose widths we needed to adjust, I used the below. I wrote it two ways so that it works in all our environments, which currently have v9.3.1 and 9.4.0.

#panelid1 {
  width: 15% !important;       /* works in 9.3.1 but not 9.4.1 */
  flex-basis: 15% !important;  /* works in 9.4.1 but not in 9.3.1 */
}
#panelid2 {
  width: 25% !important;       /* works in 9.3.1 but not 9.4.1 */
  flex-basis: 25% !important;  /* works in 9.4.1 but not in 9.3.1 */
}
That last one seems to undo the month summarizing.
Correct, I made sure it was NOT disabled, as a process of elimination during the troubleshooting.

Resolution: having made sure it was not on the deployer or in '/opt/splunk/var/run/splunk/deploy/apps/', I manually deleted the TA folder and ran a rolling restart on the SHC. This fixed it.

Prior to this I had also found WARNs in _internal relating to deprecated parameters in limits.conf, so I am planning a change tomorrow to move to the updated stanza / authorize params:

[auth]
enable_install_apps = true

I also noted that in the given app, under app.conf, there was a niche setting:

allows_disable = false

I'm unclear if this has any impact on deletion (the docs don't say).
@mvasquez21  Please refer to my output: