All Topics


Is it possible to use a lookup file in Notable Event suppression, say, to look up a list of assets/environments that we do or don't want to know about?
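As far as I know, ES suppression eventtypes cannot contain pipes, so the lookup cannot live in the suppression itself; one place a lookup can definitely be applied is in the correlation search, before notables are created. A minimal sketch, assuming a hypothetical lookup file asset_suppression_list.csv with columns dest and suppress (both names are illustrative, not from the post):

  <your correlation search>
  | lookup asset_suppression_list.csv dest OUTPUT suppress
  | where isnull(suppress) OR suppress!="true"

Assets flagged in the lookup are then filtered out before any notable fires.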
Recently I encountered an issue while rebuilding data on one of our indexers. During this process I needed to execute the following command:

  /opt/splunk/bin/splunk _internal call /data/indexes/main/rebuild-metadata-and-manifests

However, upon running it I was prompted for a Splunk username and password. Typically we use the credentials created in the web GUI, but since the web GUI is usually disabled on indexers, there is no GUI username and password available on them. I tried my search head username and password, followed by the OS username and password, but neither worked. After some research, I discovered that every Splunk instance includes a default admin user created during installation (username: admin, password: changeme), but that didn't work for me either. Here is the procedure that finally worked for me to reset the password for the admin user:

1. Access the indexer's CLI; the passwd file lives in /opt/splunk/etc/.
2. Rename that file to passwd.bak.
3. Create a new file named user-seed.conf in /opt/splunk/etc/system/local/ with the following configuration:

     [user_info]
     USERNAME = admin
     PASSWORD = <password of your choice>

4. Restart the Splunk service on that indexer using /opt/splunk/bin/splunk restart. This will generate a new passwd file.

You can now use the admin user with the password you set in step 3. After resetting the password, I re-ran the initial command with the updated admin credentials and it worked.
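As a side note, a sketch of how to skip the interactive credential prompt once the password is reset; -auth is the standard global flag accepted by Splunk CLI commands (the command itself is the one from the post above):

  /opt/splunk/bin/splunk _internal call /data/indexes/main/rebuild-metadata-and-manifests -auth admin:<new password>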
Team, I wanted to convert the time below into epoch time. Please help. time - Nov 16 10:00:57 2024
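A minimal SPL sketch using strptime; the format string matches the sample timestamp exactly (adjust it if the real field differs):

  | makeresults
  | eval t = "Nov 16 10:00:57 2024"
  | eval epoch = strptime(t, "%b %d %H:%M:%S %Y")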
I am trying to upload a JSON file using the UI in Splunk Cloud and applying the settings below for parsing, but the data is coming in as a single event:

  [custom_json_sourcetype]
  INDEXED_EXTRACTIONS = json
  SHOULD_LINEMERGE = false
  KV_MODE = json
  LINE_BREAKER = },\s*{

Please advise the correct settings to apply under sourcetypes in the web UI when uploading. Here is the data:

  {
      "sourcetype": "testoracle_sourcetype",
      "data": {
          "cdb_tbs_check": [
              {
                  "check_error": "",
                  "check_name": "cdb_tbs_check",
                  "check_status": "OK",
                  "current_use_mb": "1355",
                  "percent_used": "2",
                  "tablespace_name": "SYSTEM",
                  "total_physical_all_mb": "65536"
              },
              {
                  "check_error": "",
                  "check_name": "cdb_tbs_check",
                  "check_status": "OK",
                  "current_use_mb": "23596",
                  "percent_used": "36",
                  "tablespace_name": "SYSAUX",
                  "total_physical_all_mb": "65536"
              },
              {
                  "check_error": "",
                  "check_name": "cdb_tbs_check",
                  "check_status": "OK",
                  "current_use_mb": "29",
                  "percent_used": "0",
                  "tablespace_name": "UNDOTBS1",
                  "total_physical_all_mb": "65536"
              },
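If the goal is one event per element of the cdb_tbs_check array, two things in the posted settings work against it: LINE_BREAKER requires a capturing group (},\s*{ has none), and KV_MODE = json combined with INDEXED_EXTRACTIONS = json typically produces duplicate field extractions. Since index-time breaking of nested JSON rarely yields valid per-event JSON anyway, a search-time flattening sketch may be simpler (the index name is a placeholder):

  index=main sourcetype=custom_json_sourcetype
  | spath path=data.cdb_tbs_check{} output=check
  | mvexpand check
  | spath input=check

Each result then carries the fields of one tablespace check (tablespace_name, percent_used, and so on).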
Hi, please check the above two screenshots. I want to join these queries in such a way that I will get AppID along with the columns in the first search query. The requirement is that AppID should come against the order ID from the first screenshot. Please suggest.
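Since the actual queries are only visible in the screenshots, here is only the shape of one possible answer; OrderID and AppID are assumed field names, and the searches are placeholders:

  <your first search>
  | join type=left OrderID
      [ search <your second search>
        | fields OrderID AppID ]

For large result sets, combining both searches and using stats values(AppID) by OrderID is usually preferred over join, which is bound by subsearch result limits.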
Hi, could anybody give me a search to calculate the average _indextime for my events? Once that's done, what do I have to change in the cron parameters of my alert to take this metric into account? Thanks
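A minimal SPL sketch for the average indexing delay, assuming the metric wanted is the gap between event time and index time (the index name is a placeholder):

  index=your_index
  | eval lag_seconds = _indextime - _time
  | stats avg(lag_seconds) as avg_lag_seconds

If the average lag turns out to be, say, 5 minutes, a common approach is to shift the alert's time window (e.g. earliest=-15m@m latest=-5m@m) rather than change the cron expression itself.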
Hello everyone, I'm new to Splunk and still have a lot to learn. I want to ask a question: how do I forward data in JSON format from Netscout to Splunk? Should I use a Universal Forwarder, or maybe an app from Splunkbase? Thanks for the attention #Netscout #JSON
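A minimal inputs.conf sketch for one common pattern, assuming Netscout can write its JSON output to a local file that a Universal Forwarder monitors; the path and index name are placeholders, not from the post:

  [monitor:///var/log/netscout/*.json]
  index = netscout
  sourcetype = _json

Whether a Splunkbase app is the better route depends on the specific Netscout product; this is only one of the options being asked about.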
Hello everyone, I want help with how to deal with the following problem. A company got hacked, and we want to know how the hack happened and whether or not there was a data leak. The company does not use any EDR, SIEM, or NDR systems. Question: what is the best way to extract logs from the company's systems and analyze them in Splunk, and what are the rules to start searching with?
What would be the storage requirement for SmartStore when RF is 2 for an indexer cluster? Would it be double that of traditional storage for one indexer, or would only primary buckets move to SmartStore? Is anything specifically mentioned in the Splunk docs? (I have tried to find it but did not see anything.) Consider the following scenarios:

1) 2 on-prem indexers with dedicated storage, RF is 2. Each indexer has 5 TB of data, so combined it would be 10 TB.
2) 4 indexers across 2 sites, 2 on each site. Each site maintains 1 copy of each bucket; again, combined storage would be 10 TB.

When migrating to SmartStore, what would be the expected storage utilization?
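A back-of-the-envelope sketch, based on the documented SmartStore behavior that the remote object store keeps only a single copy of each warm bucket regardless of replication factor (worth verifying against the SmartStore sizing documentation for your version):

  combined local data at RF=2   : 10 TB
  unique bucket data            : 10 TB / 2 = 5 TB
  expected remote store usage   : ~5 TB
  local storage per indexer     : hot buckets + cache sized to the search working set

The same arithmetic applies to the two-site scenario, since site replication also multiplies only the local copies, not the remote ones.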
I changed the metrics above, and then I couldn't save; it displayed a 500 error.
Hi Splunk Answers, is there a way to get license count by cluster peer? For example, if I have 3 Splunk clusters, I need to get the license usage by cluster (location) and by index (sourcetype if possible). The internal logs don't identify the indexer based on host (h). I'm thinking of different SPL queries but have no idea where I can get this. Need your help, thanks!

  07-16-2024 21:14:52.451 -0500 INFO LicenseUsage - type=Usage s="test:source::/opt/splunk/var/log/test_2024-07-16.log" st="test-st" h=hosttest o="" idx="testidx" i="sadsadasdasdadasdasdasdasdasda" pool="auto_generated_pool_enterprise" b=503 poolsz=1234567891012
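Since LicenseUsage events are written on the license manager and h= is the originating host rather than the indexer, one workaround is the per-index throughput in metrics.log, where host is the indexer that did the indexing. A sketch, assuming a hypothetical lookup indexer_site.csv mapping host to a cluster/location; note this measures indexed volume, which approximates but does not exactly equal licensed volume:

  index=_internal source=*metrics.log* group=per_index_thruput
  | stats sum(kb) as kb by host, series
  | lookup indexer_site.csv host OUTPUT site
  | stats sum(kb) as kb by site, series
  | eval GB = round(kb / 1024 / 1024, 2)

(series is the index name in per_index_thruput events.)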
Hi All, I am working on skipped searches. What is the difference between the 2 messages below?

1) The maximum number of concurrent historical scheduled searches on this cluster has been reached
2) The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached
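A sketch of the distinction as I understand it: message 1 is the cluster-wide scheduler concurrency cap (derived from limits.conf settings such as max_searches_per_cpu), hit when too many scheduled searches of any kind run at once; message 2 is the per-search cap, hit when new runs of one specific search start before its previous runs finish. The per-search knob lives in savedsearches.conf (the stanza name is a placeholder):

  [my scheduled search]
  max_concurrent = 2

Raising max_concurrent above its default of 1 treats the symptom; a search that overlaps itself usually needs a shorter runtime or a less frequent schedule.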
I'm trying to distribute an app from the deployment server to the indexers via the cluster manager. In the cluster manager's deploymentclient.conf, serverRepositoryLocationPolicy and repositoryLocation are used to receive the app into $SPLUNK_HOME/etc/manager-apps, and the cluster manager pushes it to peer-apps on the indexers for distribution. Distribution to the indexers was successful, but an install error message appears in the deployment server's internal log. Is there a setting to prevent items distributed to manager-apps from being installed?
We are in the process of data onboarding. We managed to deploy a distributed architecture with 3 indexers, 3 search heads, a cluster manager, a deployer, a deployment server, and 2 intermediate forwarders. On my syslog server, I receive logs from the firewall through syslog port 10514, and I managed to install a forwarder on the syslog server, connected to my deployment server. In the forwarder's configuration, I connect to both intermediate forwarders. Now help me finish this task: how can I manage to see the firewall logs in Splunk? What do you think I should edit on my syslog server? Please remember I don't write the syslog (firewall) logs to a file; they are on-stream. My forwarder's inputs.conf file:

  [udp://514]
  connection_host = ip
  index = tcra_firewall_idx
  sourcetype = tcra:syslog:log
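A minimal sketch of the likely fix: the post says the firewall sends syslog to port 10514, while the input above listens on UDP 514, so the stanza may simply need the matching port (everything else kept as posted):

  [udp://10514]
  connection_host = ip
  index = tcra_firewall_idx
  sourcetype = tcra:syslog:log

Also verify that no syslog daemon on the server is already bound to that port, since only one process can own it, and confirm outputs.conf points at the two intermediate forwarders.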
Hello, I'm struggling mightily with this one. I have two dates in the same event, both strings; their format is below. I would like to evaluate the number of days between the firstSeen and lastSeen dates. I would also like to evaluate the number of days between firstSeen and when the search is performed. Any help would be much appreciated...

  firstSeen: Aug 27, 2022 20:18:37 UTC
  lastSeen: Jun 23, 2024 06:17:25 UTC
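A minimal SPL sketch using strptime; the format string matches the samples shown, with %Z consuming the trailing UTC:

  | eval first_epoch = strptime(firstSeen, "%b %d, %Y %H:%M:%S %Z")
  | eval last_epoch = strptime(lastSeen, "%b %d, %Y %H:%M:%S %Z")
  | eval days_between = round((last_epoch - first_epoch) / 86400, 1)
  | eval days_since_first = round((now() - first_epoch) / 86400, 1)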
Hello, is it possible to monitor remote API calls out of the box with Splunk Observability Cloud? My application is running on an IIS server and is .NET. I have 3 critical API calls:

1. Calling an external third-party service (which I cannot get Splunk onto, for that reason).
2. Calling an Azure Function that is not connected to Splunk.
3. Calling another ASP.NET Core application that is currently NOT monitored by Splunk.

When my main application calls those 3 services, can I get an overview of those calls out of the box?
I want to get the search below executed and display the results in a table for all comma-separated values that get passed from a dropdown.

  index="xxx" source = "yyyyzzz" AND $DropdownValue$ AND Input
  | eventstats max(_time) as maxTimestamp by desc
  | head 1
  | dedup _time
  | eval lastTriggered = strftime(_time, "%d/%m/%Y %H:%M:%S %Z")
  | stats values(lastTriggered) as lastTriggeredTime
  | appendcols
      [ search index="xxx" source = "yyyyzzz" sourcetype = "mule:rtf:per:logs" AND $DropdownValue$ AND Output
        | eventstats max(_time) as maxTimestamp by desc
        | head 1
        | dedup _time
        | eval lastProcessed = strftime(_time, "%d/%m/%Y %H:%M:%S %Z")
        | stats values(lastProcessed) as lastProcessedTime ]
  | appendcols
      [ search index="xxx" source = "yyyyzzz" sourcetype = "mule:rtf:per:logs" AND $DropdownValue$ AND Error
        | eventstats max(_time) as maxTimestamp by desc
        | head 1
        | dedup _time
        | eval lastErrored = strftime(_time, "%d/%m/%Y %H:%M:%S %Z") ]
  | eval "COMPONENT ID" = "$DropdownValue$"
  | eval "Last Triggered Time" = lastTriggeredTime
  | eval "Last Processed Time" = lastProcessedTime
  | eval "Last Errored Time" = lastErrored
  | table "COMPONENT ID", "Last Triggered Time", "Last Processed Time", "Last Errored Time"
  | fillnull value="NOT IN LAST 12 HOURS" "COMPONENT ID", "Last Triggered Time", "Last Processed Time", "Last Errored Time"

For example, if $DropdownValue$ contains ABC,DEV, then the entire search above should get executed twice and 2 rows of data should be displayed in the table. Can someone guide me on how this can be achieved?
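A sketch of one way to run a search once per value, using map; maxsearches and the simplified inner search are illustrative, and map substitutes $component$ from the expanded field (in a dashboard, the map variable may need escaping so it is not consumed as a dashboard token):

  | makeresults
  | eval component = split("$DropdownValue$", ",")
  | mvexpand component
  | map maxsearches=10 search="search index=xxx source=yyyyzzz $component$ | head 1 | eval component_id=\"$component$\" | table component_id _time"

Each expanded value then produces one row, which matches the ABC,DEV example giving 2 rows.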
I am getting the error "Login failed due to client tls version being less than minimal tls version allowed by the server" when editing the connection. From the Splunk community, I have already applied some suggested solutions to my configuration: using the DB Connect setup page to set the TLS version with the parameter -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2, and adding sslVersions = tls1.2 under [sslConfig]. None of the above worked! Kindly suggest anything else I need to check on my end, or any solution for this error. Splunk DB Connect #tlsversion
Hi All, it would be a great help if anyone could help me figure this out. An app is deployed on the UFs to receive such logs in Splunk under the index wineventlog. I can see 2 different sourcetypes (xmlwineventlog, XmlWinEventLog) under the wineventlog index:

sourcetype XmlWinEventLog (source: "XmlWinEventLog:Application", "XmlWinEventLog:Security", "XmlWinEventLog:System")
sourcetype xmlwineventlog (source: "WinEventLog:Microsoft-Windows-Sysmon/Operational", "WinEventLog:Microsoft-Windows-Windows Defender/Operational")

Please help me understand where I should check for the exact cause of these two distinct case-sensitive sourcetypes. Thanks
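A minimal sketch for narrowing this down: first map which hosts and sources emit each casing, then inspect the effective inputs configuration on one of those forwarders with btool (--debug prints which .conf file each setting comes from):

  index=wineventlog
  | stats count by sourcetype, source, host

  $SPLUNK_HOME/bin/splunk btool inputs list --debug

Two apps setting the sourcetype with different casing (for example, a Windows TA versus a Sysmon TA) is the usual cause this would reveal.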