All Posts

I ran the following search in Search & Reporting: index=_internal (ERROR OR WARN) "jira". This returned 3 significant errors related to the API token. I'm unsure where I can find the file that contains the API token for Jira in Splunk. I am working in a Splunk Cloud environment. I've attached screenshots of the errors for reference. Any help would be greatly appreciated! Thank you in advance.
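(A minimal sketch of one way to look for the stored token from the search bar, assuming the Jira add-on saves its credential through Splunk's standard storage/passwords store; whether this particular add-on does so is an assumption. In Splunk Cloud there is no filesystem access, so REST or the add-on's setup UI is usually the only route.)

| rest /servicesNS/-/-/storage/passwords
| search title=*jira*
| table title realm username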
@livehybrid, it is in the /opt/sc4s/local/context folder. Thanks
I found some fixes related to Citrix added in version 3.35.1: https://github.com/splunk/splunk-connect-for-syslog/releases?page=1 So I decided to update to the latest version, 3.36.0. Unfortunately, the issue remains.
Hi @u_m1580

I would look at using a tstats search, as this will be more performant when there is data in the index. Something simple such as:

| tstats count where index=<yourIndex>

Then you can alert when count=0, or add | where count>0 and alert when there are no results.
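A fuller sketch of that alert search, assuming the time window is set to the period you want to monitor (the index name and the 24-hour window are placeholders):

| tstats count where index=<yourIndex> earliest=-24h
| where count=0

Because | tstats count still returns a single row with count=0 when nothing matched, the where clause keeps exactly one result when the index is silent, so the alert can simply trigger on "number of results > 0".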
If all these things were documented, there wouldn't be a need for Answers, Splunk Trust and .conf! 
Huh, thanks, that works. This should be changed, or at least clarified, in the documentation.
timewrap is looking for a hidden field called _span. This should work:

| tstats count where index=my_index by _time span=1h
| eval _span=3600
| timewrap 1w

This should give a warning:

index=my_index
| timechart span=1h count
| fields - _span
| timewrap 1w
If you use timewrap without previously using the timechart command, you get a warning: "The timewrap command is designed to work on the output of timechart." If the format is correct, it works though. For example, these two queries give the same output:

| tstats count where index=my_index by _time span=1h
| timewrap 1w

index=my_index
| timechart span=1h count
| timewrap 1w

The first query is way faster in this case, but I get the warning mentioned above. (This is not only about the tstats command; it is also possible to recreate timechart with other commands, iirc.) The docs say: "You must use the timechart command in the search before you use the timewrap command." (Both the SPL and SPL2 docs say this.) Why is this the case though? Besides the docs and the warning, nothing hints towards this being correct; it works... Am I missing something? If not, is it possible to deactivate the warning?
Hi, I am running Splunk standalone 8.4.1 with the Citrix add-on 8.2.3 installed. Also, I have SC4S running version 3.31.0. I configured Citrix to send syslog events to SC4S, and running a tcpdump on SC4S, I see those events arriving. According to the documentation, nothing else must be done at the SC4S level: https://splunk.github.io/splunk-connect-for-syslog/3.31.0/sources/vendor/Citrix/netscaler/ Unfortunately, I don't see any Citrix events in Splunk. I searched in the index "netfw" and also filtered by sourcetype (sourcetype="citrix*" and index=*); in both cases, no events are there. Other events, from our firewall, are reaching Splunk without any issue via the same SC4S server, so I have ruled out network issues. Any idea about what could be happening? Any SC4S logs that I could check? Thanks a lot.
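A minimal sketch of a search worth trying here, assuming a default SC4S setup where events that fail vendor classification land in the fallback sourcetype (index and time range are placeholders):

index=* sourcetype=sc4s:fallback earliest=-1h
| stats count by host, index

If the NetScaler events appear as sc4s:fallback, SC4S is receiving them but not recognising them as Citrix traffic, which points at the parser/filter rather than the network path.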
All, I just need some advice. We have a customer that we are migrating across different cloud providers. Their current Splunk cluster is running on Ubuntu 20.04 (which goes end of life on the 31st of May). We want to add new nodes in the new cloud provider running on RHEL 9.x and extend the existing Splunk cluster, so for a short while we will have a mix of Ubuntu and RHEL nodes running together in the same cluster. Splunk have said this is doable but not something they can guarantee, as they are not responsible for the OS running on the cluster nodes. Below is the migration plan we are proposing to the customer; once we get approval we will deploy a PoC to test the migration approach:

1. Ensure there is low latency and sufficient bandwidth between the two cloud providers.
2. Deploy new RHEL nodes in the new cloud provider with the same version of Splunk.
3. Add the RHEL nodes to the existing Ubuntu cluster and let the cluster synchronize the data to the new RHEL nodes.
4. Following successful data synchronization and testing, transfer the cluster manager (master) role to one of the RHEL nodes.
5. Finally, after a period of co-existence and validation, remove the existing Ubuntu nodes from the cluster (indexers and search heads).

Any help or guidance appreciated.
Hi All. We are using Splunk for collecting logs from different devices, but logs from one device on the network were not present on the Splunk server. After some hours, the logs from that device started appearing on the Splunk server again. In the period where we missed logs from this device, there were no network changes or changes on the client. We are looking for the reason for this. The logs were missing for around 6 hours, from early in the morning. Could it be some memory issue on the server, or something with the indexes? If there was some work to prepare for some kind of maintenance on the backend, could this have any effect on the Splunk server's log performance? The device we are missing logs from during these hours has been online all the time. Any tips on how and where to troubleshoot in the Splunk environment when logs are not present from one or more hosts? Thanks in advance. DD
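A minimal sketch of a check that often settles this question, assuming the gap shows up for a known index and host (both are placeholders): comparing the event timestamp with the index time reveals whether the events arrived late or never arrived at all.

index=<yourIndex> host=<missingHost> earliest=-24h
| eval indexLag = _indextime - _time
| timechart span=15m max(indexLag) as max_lag_seconds count

If max_lag_seconds spikes to several hours across the gap, the events were buffered (for example on the forwarder) and delivered late rather than dropped.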
Hi @u_m1580, you could create an alert using a simple search, index=<your_index>, that triggers if you have no events. If you need to check a list of hosts, it's different (see the sketch below). Ciao. Giuseppe
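For the list-of-hosts case, a minimal sketch of one common pattern, assuming a lookup of expected hosts exists (the lookup name expected_hosts.csv and its host field are hypothetical):

| tstats latest(_time) as lastSeen where index=<your_index> by host
| append [| inputlookup expected_hosts.csv | fields host | eval lastSeen=0]
| stats max(lastSeen) as lastSeen by host
| where lastSeen < relative_time(now(), "-1h")

Every expected host that has sent nothing in the last hour survives the final where, so the alert can trigger whenever there are results.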
Hi there, I would like to create a search that alerts us when an index is not ingesting any event data, based on any field in our index.
Hi @livehybrid,

The first API worked successfully and the user list has been extracted, but fetching the ID from the user list and passing it to the next API throws an error. I need some input on the Python script: I am getting a "TypeError: string indices must be integers" error, although when checked, the response received is in dict format.

for user in users:
    user_id = user['name']
    roles_response = requests.get(f'{controller_url}/controller/api/rbac/v1/users/{user_id}', headers=headers)
    roles = roles_response
    print(f"User: {user['name']}, Roles: {roles}")
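That TypeError usually means users is still the raw JSON string (or the list is nested one level deeper), so the loop iterates over characters instead of dicts. A minimal sketch of the likely fix, assuming users_response is the response object from the first call; the variable names and the 'users' wrapper key are assumptions, not confirmed from the post:

import requests

# Parse the body into Python objects instead of iterating over the raw string.
users = users_response.json()
# Some controllers wrap the list, e.g. {"users": [...]}; unwrap if needed.
if isinstance(users, dict):
    users = users.get('users', [])

for user in users:
    user_id = user['id']  # assumption: the roles endpoint takes the numeric id rather than the name
    roles_response = requests.get(
        f'{controller_url}/controller/api/rbac/v1/users/{user_id}',
        headers=headers,
    )
    # Call .json() here too, rather than printing the Response object itself.
    print(f"User: {user['name']}, Roles: {roles_response.json()}")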
Hi @tech_g706

I'm not sure why data from last week would be going to archive; this might be something you need to speak to Splunk Support about. However, regarding the retrieval of the archived data: if it is in DDAA then you can temporarily restore it, but I don't think it's possible to keep it restored. I would speak to Splunk Support; if there has been an issue which caused the data to archive prematurely, they may be able to have it restored for you, but it isn't something you'd be able to do yourself.

If you update the max size via the GUI, it will replicate the configuration to the relevant components within the Splunk Cloud stack; you do not need to worry so much about how/where it goes.
Hey everyone, I have a question on Splunk Cloud index maxsize. My index max size is set to 500GB, but the current size has reached 530GB, and some of the latest events (from last week) are not in the index but are going to archive storage. We have 3 months of searchable retention and 3 months of archive, and the archive dashboard is showing the latest event from last week. We have 8 indexers, which are clustered, and two dedicated search heads (not clustered). My first question is: can I update the index maxsize (to unlimited) in the GUI, and will it replicate to all the indexers and the 2 search heads, or should I open a support case for that? The second question is: can I restore the logs that went to archive due to the maxsize issue to a searchable index again?
Hi @Numb78

I'm not sure this config will take effect, because SC4S uses the /event endpoint (unless you have overwritten this?). This blog article from Splunk states: "We want to ensure this is properly parsed before it gets to Splunk, as timestamp processing is bypassed (by default) with the /event HEC endpoint used by SC4S." I think there must be a configuration on the SC4S side that needs applying which differs between the TCP and UDP ingestion.
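A minimal sketch of a search that makes the symptom visible, assuming the problem manifests as a gap between the event timestamp and the time Splunk indexed it (index and sourcetype are placeholders):

index=<yourIndex> sourcetype=<yourSourcetype> earliest=-4h
| eval lag_seconds = _indextime - _time
| timechart span=5m avg(lag_seconds) max(lag_seconds)

If the UDP-ingested events show a large or erratic lag while the TCP ones do not, that supports the idea that timestamp handling differs between the two SC4S paths.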
Hi @livehybrid, I've also faced a similar problem. I want to specify the color corresponding to the field value in the map view, but the modification I made doesn't take effect. Could you please help me check it?

{
  "type": "splunk.map",
  "options": {
    "center": [-3.337953961431438, 79.98046874997863],
    "zoom": 2,
    "showBaseLayer": true,
    "layers": [
      {
        "type": "bubble",
        "latitude": "> primary | seriesByName('latitude')",
        "longitude": "> primary | seriesByName('longitude')",
        "bubbleSize": "> primary | frameWithoutSeriesNames('_geo_bounds_east', '_geo_bounds_west', '_geo_bounds_north', '_geo_bounds_south', 'latitude', 'longitude') | frameBySeriesTypes('number')",
        "dataColors": " > primary | seriesByName('status') | matchValue('colorMatchConfig')"
      }
    ]
  },
  "dataSources": {
    "primary": "ds_PHhx1Fxi"
  },
  "context": {
    "colorMatchConfig": [
      { "match": "high", "value": "#FF0000" },
      { "match": "low", "value": "#00FF00" },
      { "match": "critical", "value": "#0000FF" }
    ]
  },
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
Change the init block (solution updated):

<init>
  <set token="rangeColors">"0x118832","0xd41f1f"</set>
</init>
Hi @msatish

It looks like you need to "Rebuild Forwarder Assets". This can be done by going to Cloud Monitoring Console > Forwarders > Forwarder Monitoring Setup and clicking on the "Rebuild Forwarder Assets" button. I'd also recommend checking out the "Review the Forwarder Monitoring Setup page" docs, which have more info about this and about how to view/manage your forwarders via the CMC.