All Posts

Hi, Quoting the docs: "If the cluster is not in a valid state and the local site does not have a full complement of primaries (typically, because some peers on the site are down), remote peers also participate in the search, providing results from any primaries missing from peers local to the site."

Looking at your diagram and search, am I correct in thinking that index=site01_* is only configured on site01 and index=site02_* is only configured on site02? If so, this is first of all a misconfiguration and bad practice. It also explains why your search affinity is not working: you do not have a copy of the data in both sites. Only site02 holds data for site02_*, so index=site0* has to return data from both sites!

You should be managing your indexes via the cluster manager so that the configuration is consistent. If you want different indexes per site, then you should be using a multi-cluster deployment. If you want to restrict access between sites and search heads, you can use RBAC and search filters. Search affinity is not designed to be a security control and should not be treated as such.
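As an illustration of the RBAC approach mentioned above, here is a minimal sketch of a per-site restriction in authorize.conf; the role name and index pattern are assumptions for this example, not from the original post:

# authorize.conf on the search heads
# "site01_analyst" is a hypothetical role name
[role_site01_analyst]
srchIndexesAllowed = site01_*
srchFilter = index=site01_*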
@Tom_Lundie, what about the syslog configuration? What should I do with it?
Thank you so much for your help. I'm new to Splunk and I really want to master it. I will go and check the config as you said and let you know.
Hi All, I am working on skipped searches. What is the difference between the two messages below?

1) The maximum number of concurrent historical scheduled searches on this cluster has been reached
2) The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached
Hi, It sounds like you've made great progress, nice one. There are multiple designs and opinions out there regarding getting syslog into Splunk, and it's up to you to decide what's best. To get you started, there are tools such as Splunk Connect for Syslog, which provides an "all in one" feel. You can also use a syslog service such as rsyslog or syslog-ng to listen for your logs and cache them to disk, then forward them via a monitor stanza in inputs.conf. However, if you want Splunk to listen directly, here is an example inputs.conf that you can tweak for your deployment:

[udp://10514]
disabled = false
connection_host = ip
sourcetype = <<firewall_product>>
index = main

For sourcetype, look on Splunkbase for your firewall vendor to check if there is an appropriate TA that you can use for field extractions. For example, a Palo Alto firewall would be pan_log. For index, pick an appropriate index to suit your needs. Finally, inputs.conf can either be deployed within an app (recommended) or directly under /opt/splunk/etc/system/local/. Also, make sure that 10514 is permitted on the local firewall.
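If you go the rsyslog route instead, a minimal sketch might look like the following; the rsyslog config path, log file path, and index are assumptions for illustration:

# /etc/rsyslog.d/firewall.conf (hypothetical): cache UDP 10514 to disk
module(load="imudp")
input(type="imudp" port="10514" ruleset="firewall")
ruleset(name="firewall") {
    action(type="omfile" file="/var/log/firewall/firewall.log")
}

# inputs.conf on the forwarder: monitor the cached file
[monitor:///var/log/firewall/firewall.log]
sourcetype = <<firewall_product>>
index = main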
I'm trying to distribute an app from the deployment server to the index server via the cluster manager. In the cluster manager's deploymentclient.conf, it uses serverRepositoryLocationPolicy and repositoryLocation to receive the app in $SPLUNK_HOME/etc/manager-apps, and the cluster manager then pushes it to peer-apps on the index server for distribution. Distribution to the index server was successful, but an install error message appears in the deployment server's internal log. Is there a setting to prevent items distributed to manager-apps from being installed?
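For reference, this is roughly the shape of deploymentclient.conf the question describes on the cluster manager; a minimal sketch, and the policy value shown is one common choice, not necessarily the poster's exact config:

# deploymentclient.conf on the cluster manager
[deployment-client]
# always use the local repositoryLocation rather than the one the
# deployment server specifies, so apps land in manager-apps
serverRepositoryLocationPolicy = rejectAlways
repositoryLocation = $SPLUNK_HOME/etc/manager-apps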
Outstanding. That worked perfectly. Thank you. 
What type of DB are you trying to connect to? Can you share your connection string and other configuration (redacted as appropriate) please?
We are in the process of data onboarding. We managed to deploy a distributed architecture in which we have 3 indexers, 3 search heads, a cluster master, a deployer, a deployment server, and 2 intermediate forwarders. On my syslog server, I receive logs from the firewall through syslog port 10514, and I managed to install a forwarder on my syslog server connected to my deployment server. In my forwarder configuration file, I connect to both intermediate forwarders. Now help me finish this task: how can I manage to see the firewall logs in my Splunk? What do you think I should edit on my syslog server? Please remember I don't write the syslog (firewall) logs to a file; they arrive as a stream. My forwarder inputs.conf file:

[udp://514]
connection_host = ip
index = tcra_firewall_idx
sourcetype = tcra:syslog:log
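For context, the forwarder-to-intermediate-forwarder connection described above would typically live in outputs.conf; a minimal sketch, where the hostnames and port are assumptions:

# outputs.conf on the syslog server's forwarder
[tcpout]
defaultGroup = intermediate_forwarders

[tcpout:intermediate_forwarders]
# hypothetical hostnames; 9997 is the conventional receiving port
server = if1.example.com:9997, if2.example.com:9997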
Hi, You can do that with an eval command.

| eval firstSeenTS = strptime(firstSeen, "%b %d, %Y %H:%M:%S %Z"),
       lastSeenTS = strptime(lastSeen, "%b %d, %Y %H:%M:%S %Z"),
       firstLastDiff = (lastSeenTS - firstSeenTS)/86400,
       firstNowDiff = (now() - firstSeenTS)/86400

If you want to round your days down to whole numbers, you can use floor().
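For example, appending something like this (a sketch; the field names follow the eval above) gives whole days:

| eval firstLastDiffDays = floor(firstLastDiff),
       firstNowDiffDays = floor(firstNowDiff)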
Hi CW, Assuming you haven't made any modifications to the Palo Alto TA (and its sourcetypes), there is no reason why Splunk would be dropping the URL log_subtype. Check whether a filter on the Palo Alto logging policy excludes the URL subtype. Otherwise, can you confirm the PAN-OS version and TA version that you have deployed, as there have been some issues with this sourcetype before. Cheers!
Hello, I'm struggling mightily with this one. I have two dates in the same event; both are strings. Their format is below. I would like to evaluate the number of days between the firstSeen and lastSeen dates. I would also like to evaluate the number of days between firstSeen and when the search is performed. Any help would be much appreciated...

firstSeen: Aug 27, 2022 20:18:37 UTC
lastSeen: Jun 23, 2024 06:17:25 UTC
site_replication_factor = origin:3,site1:3,total:6
site_search_factor = origin:2,total:2
A search head exists on each site. How do I ensure that only data from the site the SH belongs to is retrieved?
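Search affinity is what normally handles this: each search head declares which site it belongs to in server.conf and then prefers bucket copies local to that site. A minimal sketch, where the manager URI and key are placeholders:

# server.conf on the site1 search head
[general]
site = site1

[clustering]
mode = searchhead
multisite = true
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your key>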
Hi, There could be a few things going on here. I would hazard a guess that you're running Splunk as a non-root user and trying to bind to port 443, which is a privileged port. Check if Splunk is listening on 443/TCP:

ss -tlp

Also, check the web status via $SPLUNK_HOME/bin/splunk status, and look for UiHttpListener entries in /opt/splunk/var/log/splunk/splunkd.log. If this is indeed your problem, I suggest that you use a different port, such as 8443. You could also try to allow Splunk to bind to system ports, but you will need to research how to do this securely for your environment.

If Splunk is listening on 443, start working your way out. Are there any error entries in splunkd.log? Can you curl it via localhost?

curl -kv https://localhost

Are you dropping connections on the local firewall?

firewall-cmd --list-all

Is there a routing issue or a network firewall between your client browser and the Splunk instance? Is there an issue with the client machine or browser? Feel free to upload any relevant splunkd.log entries, redacted appropriately, to help troubleshooting.
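If you do move Splunk Web to an unprivileged port, the change goes in web.conf; a minimal sketch (the SSL line is only relevant if you are serving HTTPS, which the 443 attempt suggests):

# web.conf on the instance serving Splunk Web
[settings]
httpport = 8443
enableSplunkWebSSL = true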
Many thanks for the help. I want to expand the requirement as follows: for an "id" there could be up to 12 different possible events, with response.action.type="UserCreated", response.action.type="TxCreated", response.action.type="TxUpdated", and 9 other types. The goal is to group by "id" where only 2 action types have occurred, namely:

response.action.type="UserCreated" (Event 1) and
response.action.type="TxCreated" (Event 2)

Event Type 1
data= {
  "response": {
    "action": {
      "type": "UserCreated"
    },
    "resources": [
      {
        "type": "loginUser",
        "id": "1234"
      }
    ]
  }
}

Event Type 2
data= {
  "response": {
    "action": {
      "type": "TxCreated"
    },
    "actors": {
      "type": "loginUser",
      "id": "1234"
    }
  }
}

Event Type 3
data= {
  "response": {
    "action": {
      "type": "TxUpdated"
    },
    "actors": {
      "type": "loginUser",
      "id": "1234"
    }
  }
}
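One way to express that grouping, as a sketch: it assumes the JSON above is the raw event so spath can parse it, and that id lives under resources{} for Event 1 and under actors for the others; adjust the extraction to your real events.

<your search>
| spath
| eval id = coalesce('response.resources{}.id', 'response.actors.id')
| stats values(response.action.type) as actionTypes by id
``` keep only ids whose complete set of action types is exactly these two ```
| where mvcount(actionTypes) = 2
    AND isnotnull(mvfind(actionTypes, "^UserCreated$"))
    AND isnotnull(mvfind(actionTypes, "^TxCreated$"))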
Ah okay, thanks for the confirmation. So what they meant applies to indexes that are still on local non-S3 storage, not to indexes already converted to SmartStore, i.e. moved to the object store. "You can still search any existing buckets that were tsidx-reduced before migration to SmartStore." Which means all the reduced buckets will need to be rebuilt to full before updating the indexes.conf config to move buckets to SmartStore. https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Reducetsidxdiskusage#Restore_reduced_buckets_to_their_original_state
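Per the linked doc, restoring a reduced bucket back to full uses the splunk rebuild CLI before migration. A sketch only, with placeholder bucket directory and index name; check the doc page for the exact invocation on your version:

splunk rebuild $SPLUNK_DB/<index_name>/db/<bucket_directory> <index_name>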
Hello, is it possible to monitor remote API calls out of the box with Splunk Observability Cloud? My application is .NET and runs on an IIS server. I have 3 critical API calls:

1. Calling an external third-party service (which I cannot instrument with Splunk, for that reason)
2. Calling an Azure Function that is not connected to Splunk
3. Calling another ASP.NET Core application that is currently NOT monitored by Splunk

When my main application calls those 3 services, can I get an out-of-the-box overview showing that they are being called?
serial_number would have already been extracted, too. You can do whatever is needed. But I do not see how a chart of two values() functions would be useful in this case. Maybe you mean to have something like

_time        E21                E25
2024-07-15   51A81FC 51A86FC
2024-07-16                      51A81FC

In other words, get serial_numbers according to error_code? All you need is something like

<your search> "ErrorCode(*)"
| rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| timechart span=1d values(serial_number) by error_code

Here, I propose that you restrict events to those containing an error code in the index search rather than in another search line. Or, if you want to group error_codes by individual serial_number, like

_time        51A81FC   51A86FC
2024-07-15   E21       E21
2024-07-16   E25

then do

<your search> "ErrorCode(*)"
| rex field=message "ErrorCode\((?<error_code>[^\)]+)"
| timechart span=1d values(error_code) by serial_number

Does this make sense? Here is an emulation to get the above results. Play with it and compare with real data.

| makeresults
| eval data = mvappend("{\"time\": \"2024-07-15\", \"message\":\"gimlet::hardware_controller: State { target: Idle, state: Idle, cavity: 42400, fuel: 0, shutdown: None, errors: ErrorCode(E21)}\", \"serial_number\": \"51A86FC\"}",
    "{\"time\": \"2024-07-15\", \"message\":\"gimlet::hardware_controller: State { target: Idle, state: Idle, cavity: 42400, fuel: 0, shutdown: None, errors: ErrorCode(E21)}\", \"serial_number\": \"51A81FC\"}",
    "{\"time\": \"2024-07-16\", \"message\":\"gimlet::someotherstuff: State { target: whatever, state: whatever, some other messages, errors: ErrorCode(E25)}\", \"serial_number\": \"51A81FC\"}")
| mvexpand data
| rename data as _raw
| spath
| eval _time = strptime(time, "%F")
``` the above emulates <your search> "ErrorCode(*)" ```