All Topics

I am trying to pare down the list of ciphers we are using. When I remove AES256-GCM-SHA384, I begin to get the errors below on our Search Head Cluster.

02-24-2023 16:17:35.187 +0000 WARN SSLCommon [121742 TcpOutEloop] - Received fatal SSL3 alert. ssl_state='SSLv2/v3 read server hello A', alert_description='handshake failure'.
02-24-2023 16:17:35.187 +0000 ERROR TcpOutputFd [121742 TcpOutEloop] - Connection to host=SH_IP_REMOVED:8999 failed. sock_error = 0. SSL Error = error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

In server.conf, web.conf, inputs.conf, and outputs.conf I have the ciphers below. Once I remove AES256-GCM-SHA384, the errors begin.

cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:AES256-GCM-SHA384
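To see every instance still failing the handshake after a cipher change, a minimal sketch over the internal logs (assuming the search head can see _internal from all affected instances):

index=_internal sourcetype=splunkd (component=SSLCommon OR component=TcpOutputFd) "handshake failure"
| stats count BY host component

Each host that appears is still negotiating with a cipher list that no longer overlaps with its peer's, which usually means one side's server.conf/outputs.conf was not updated or the process was not restarted.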
When one configures the indexer cluster for SmartStore, does each indexer get its own S3 bucket?  Or is there just one very large S3 bucket and all indexers write into the same S3 bucket (separated by indexer GUID or something like that)?
Using ingestion actions, one can write a copy of events to an S3 bucket prior to indexing.  Can one search these S3 buckets with Splunk even though they were not ingested (it'd be slow, but could be useful for historical searches)?
I need to migrate my current ES installation from a VM to a physical host, due to performance issues in the virtual instance. Because of internal policies, I cannot simply clone the system via rsync, as the new physical box must have a new name to indicate it isn't a VM.

I tried copying the /opt/splunk/etc/system subdirectory of the new server to a backup location, then using rsync to replicate the /opt/splunk/etc subdirectory structure from the functional VM to the new server. I then copied the backup of system back into place, except for server.conf, where I merged the two files together.

Tons of errors. Tons of missing data in the ES dashboards. What am I missing? Thanks in advance for any suggestions.
index=mail
| lookup email_domain_whitelist domain AS RecipientDomain OUTPUT domain AS domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain OUTPUT domain AS domain_match2
| where isnotnull(domain_match2)
| stats values(recipient) AS recipient values(subject) AS subject earliest(_time) AS "Earliest" latest(_time) AS "Latest" BY RecipientDomain sender
| where mvcount(recipient)=1
| eval subject_count=mvcount(subject)
| sort - subject_count
| convert ctime("Latest")
| convert ctime("Earliest")

I would like the results to also show whether the email has any attachments, along with each attachment's name and its size in MB/GB. Is this possible?

Adding on: I also have a list of suspicious keywords in a Lookup Editor lookup called suspicoussubject_keywords. Can you include a query that matches these keywords against the subject and then displays the results?
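For the keyword part, a minimal sketch that turns the lookup into wildcard subject filters, assuming the lookup has a single column named keyword (the column name is an assumption - adjust it to whatever Lookup Editor shows):

index=mail
    [| inputlookup suspicoussubject_keywords
     | eval subject="*" . keyword . "*"
     | fields subject
     | format]
| table _time sender RecipientDomain subject

The subsearch expands to ( subject="*kw1*" OR subject="*kw2*" ... ), so only events whose subject contains one of the keywords survive. The attachment part depends on the mail sourcetype actually carrying attachment fields; if it exposes, say, file_name and file_size in bytes (hypothetical names), adding values(file_name) to the stats and | eval size_mb=round(file_size/1024/1024,2) would surface them.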
Hi,

When I inherited this deployment, there were a lot of skipped searches. The 3-node SHC was under-resourced, but with some cron skewing, tuning the limits, reducing zombie scheduled searches, and optimizing some searches, I reduced a lot. However, some intensive apps were still causing skipped searches, so we added a 4th node to the SHC, and it ran smoothly without a skipped search.

Recently I started seeing a persistent skipped-search warning. Nothing new was added (no new scheduled searches) and resource usage looked good, but I kept seeing "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". I can see the jobs that were skipped, but I am not finding a way to see which jobs piled up during a time interval and caused the skipped search and the warning. I did notice some of the skipped searches were throwing warnings and errors, and I am wondering if that caused a hanging job that added to the count and created a skipping loop.

If anyone has a way to see the scheduled searches that accumulate and cause this error and skipping, please advise. Thank you!
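A sketch of where to start, assuming the standard scheduler logs in _internal are searchable from the SHC - the first search shows skipped vs. completed runs per minute, the second shows which scheduled searches run longest:

index=_internal sourcetype=scheduler savedsearch_name=*
| timechart span=1m count(eval(status="skipped")) AS skipped count(eval(status="success")) AS completed

index=_internal sourcetype=scheduler status=success
| stats count max(run_time) AS max_run_time_s avg(run_time) AS avg_run_time_s BY savedsearch_name
| sort - max_run_time_s

A search whose max_run_time_s exceeds its cron interval is a likely candidate for instances stacking up against the per-search concurrency cap.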
I have the below code:

<input type="checkbox" token="clear_button" searchWhenChanged="true">
  <label></label>
  <change>
    <unset token="form.clear_button"></unset>
    <unset token="form.Name_Token"></unset>
    <unset token="form.SO_Token"></unset>
    <unset token="form.L2SO_Token"></unset>
    <unset token="form.LOB_Token"></unset>
    <unset token="form.Func_Token"></unset>
  </change>
  <delimiter> </delimiter>
  <choice value="clear">Clear</choice>
</input>

When hovering over the word "Clear" to the right of the checkbox, it is clickable. I just want the checkbox itself to be clickable - not the text and whitespace to the right. Is this possible? Thanks
Hi all, how can I extract the parts of this message into new fields for BGP connection failures?

bgp_connect_start: connect 2403:df40:0:16::3 (Internal AS 14630) (instance master): No route to host

The full event looks like:

BGP_CONNECT_FAILED: bgp_connect_start: connect 2403:df40:0:16::3 (Internal AS 14630) (instance master): No route to host
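A minimal rex sketch against the sample above - it assumes every event follows this exact shape, and the field names are just illustrative:

| rex field=_raw "BGP_CONNECT_FAILED: (?<bgp_event>\w+): connect (?<neighbor_ip>\S+) \((?<peer_type>\w+) AS (?<peer_as>\d+)\) \(instance (?<instance_name>[^)]+)\): (?<failure_reason>.+)"

On the sample this yields bgp_event=bgp_connect_start, neighbor_ip=2403:df40:0:16::3, peer_type=Internal, peer_as=14630, instance_name=master, and failure_reason=No route to host. The same regex could go into props.conf/transforms.conf for a permanent extraction.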
Hi. We have a use case where we would like to maintain a single informational field on some of our entities. The field name is Application Service, and it is fairly static - but not quite, so it might change over time. This field can logically only have one value, and the problem with the scheduled import is that if the value changes, there will be two values for this field name. So we are trying to create a Python script that will maintain this value, so that if it changes there will still only be one value - the old one is overwritten.

The problem is that when we try to change this value and POST it to Splunk, we get a 200 return code, but the value has not changed.

This is an example (please don't kill me, this is just a test, and I'm not really a Python developer):

import requests

# base_url, token and get_splunk_rest() are defined elsewhere in the real script
def update_splunk_rest(key, jsonDict):
    url = base_url + "/servicesNS/nobody/SA-ITOA/itoa_interface/entity/" + key + "?is_partial_data=0"
    authHeader = {'Authorization': token}
    r = requests.post(url, headers=authHeader, json=jsonDict, verify=False)
    print(r.text)
    return r

splk_entity = get_splunk_rest("2b4566fb-367e-44ec-b068-d6541a2024e6")
print(splk_entity.status_code)
entity = splk_entity.json()
print(entity)

title = entity['title']
info = entity['informational']
print(info)
keys = info['fields']
values = info['values']
print(keys)
print(values)

# Find the index of the 'Application Service' field and overwrite its value
i = 0
for field in keys:
    if field == 'Application Service':
        break
    i = i + 1
values[i] = "Dette er Las test"

print("=============================================================")
print(entity)
#payload = {"_key":"821bd2f7-83d6-47a9-a753-60c04523d57e","title":title,"informational":{"fields":keys, "values":values}}
#print(payload)

response = update_splunk_rest("2b4566fb-367e-44ec-b068-d6541a2024e6", entity)
print(response.status_code)

The entity is changed just before the POST (update_splunk_rest), which posts with is_partial_data=0, as we are replacing the entire record from ITSI. Has anyone else had this problem and found a solution?

Kind regards
Las
I have two searches in different indexes, index=itsi_grp* and index B, and these two searches have common field values in the hostname and problem-description fields. Using these two fields I have to get an "ID" field from index B.

For example:

Index itsi_grp*:
Hostname   Problemdesc   name
AAA        CPU issue     abc

Index B:
Host   Problem   ID
AAA    CPU       555

My result should look like:

Host   Problem   ID    incindetnumber   name
AAA    CPU       555   *******          abc

I am using the join command to get the results, but it is not returning any values even though the data is available in both indexes. I used the query below:

index=itsi_grp*
| search name="abc"
| rename Hostname as Host, Problemdesc as Problem
| join Hostname Problem [search index=B sourcetype=abc]
| table Host, Problem, incindetnumber, ID, name

For fetching the incident number I will be using a lookup. The only problem is that when using the join command I am not getting the results; it returns 0 statistics. Could you please check whether I am using the join command properly?
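A sketch of one likely fix, offered under the assumption that the rename is intentional: after | rename, the fields are called Host and Problem, so the join keys must use those names. Note too that the values themselves must match exactly - "CPU issue" on one side will never join with "CPU" on the other, so you may need to normalize the values or join on Host alone:

index=itsi_grp* name="abc"
| rename Hostname AS Host, Problemdesc AS Problem
| join type=inner Host
    [ search index=B sourcetype=abc ]
| table Host, Problem, incindetnumber, ID, name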
Hi, I need to color the value in the update.version column when it is greater than the value in the version column. How can I do this? Thank you
Hi Splunkers,

I'm using the Splunk App for Unix and Linux to monitor some CentOS parameters. All scripts work fine when I run them locally on the UF. However, some of them (e.g. vmstat, who) don't display results in Splunk - just the headers (the parameter descriptions). Running who on the UF produces full output, but in Splunk's GUI only the header row appears.

I tried to play with the sourcetypes' line breaker, but it didn't help. Has anyone encountered a similar issue? I would be grateful for your help.

regards,
Sz
I've been trying to solve this problem for days now with no success. Maybe I can find ultimate salvation here. I have a single index where I need to run two queries.

The first query finds all hosts that generate logs for a particular app called APP (I need to count totals). The second query searches for the hosts that were scanned by the APP.

Problem: I need to subtract the hosts detected in Query 2 from the hosts found in Query 1. That will produce a list of hosts that were potentially not scanned in a selected period of time.

Query 1: index=demon source="/opt/app/logs/*"
Query 2: index=demon source="*scan.log" "scan Finished"

From what I've learnt so far, |multisearch appears to be the best candidate; however, when I run it I only get one value listed, I guess because a host can only be attributed once. I'm sure there are multiple ways of achieving this goal.

Thanks
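A sketch of one common alternative to multisearch, assuming host is the field that identifies the machine in both sources (if the scanned host is recorded in some other field in scan.log, that field would need to be normalized to host first):

index=demon (source="/opt/app/logs/*") OR (source="*scan.log" "scan Finished")
| eval kind=if(match(source, "scan\.log$"), "scanned", "app")
| stats values(kind) AS kind BY host
| where kind="app"

Because stats collapses each host to the set of kinds it produced, a host that appears only as "app" (never "scanned") is exactly one that logged APP activity but was never scanned.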
Hi Team, greetings for the day.

I have a prod ITSI environment, and I have managed to upgrade my test environment to the same version as prod. I need help and steps on how to back up the prod KV store and restore it in my test environment. Please help me ASAP.
Hello. I am currently aggregating a certain log. Keying on the source IP address and the connection timestamp, I want to extract only the logs for clients that are still connecting ten or more days after their first connection date, but I have not been able to extract them correctly. (The total number of connection days is the count of distinct days on which a connection occurred between the first and last connection dates.) I would appreciate a search that extracts this kind of data.

Sample data (source IP address, connection time):
1.0.0.0 2023-01-01 10:35:45
1.0.0.0 2023-01-03 12:33:10
1.0.0.0 2023-01-08 09:35:06
1.0.0.0 2023-01-11 21:18:29
2.0.0.0 2023-01-01 23:32:11
2.0.0.0 2023-01-05 04:55:15
2.0.0.0 2023-01-10 19:35:24

Expected output (source IP address, first connection, last connection, connection days):
1.0.0.0 2023-01-01 10:35:45 2023-01-11 21:18:29 4
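A sketch against the sample above, assuming each event's _time is the connection time and the source IP is in a field called src_ip (both names are assumptions):

index=your_connection_logs
| bin span=1d _time AS day
| stats earliest(_time) AS first_seen latest(_time) AS last_seen dc(day) AS connection_days BY src_ip
| where last_seen - first_seen >= 10 * 86400
| fieldformat first_seen = strftime(first_seen, "%Y-%m-%d %H:%M:%S")
| fieldformat last_seen = strftime(last_seen, "%Y-%m-%d %H:%M:%S")

On the sample data this keeps 1.0.0.0 (first seen 2023-01-01, last seen 2023-01-11, 4 distinct connection days) and drops 2.0.0.0, whose last connection is less than ten days after its first.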
Hello, help me please. I have a REST API data source that gets JSON data into the main index, something like this:

["user","domain\\user1","domain\\user2","domain\\user3"] ...

I'd like to create a search which runs for all the users extracted from this JSON. How is it possible to use all these values in another search? Thanks
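A sketch of one way to feed those array elements into another search as a filter, assuming the array arrives as the raw JSON of the event and that the first element "user" is a header to drop (the sourcetype and field names are illustrative):

index=other_index
    [ search index=main sourcetype=rest_api
      | spath output=user path={}
      | mvexpand user
      | search user!="user"
      | dedup user
      | fields user
      | format ]

spath with path={} pulls every element of a top-level JSON array into a multivalue field, mvexpand splits it into one result per user, and format turns the list into ( user="domain\\user1" OR user="domain\\user2" ... ) for the outer search. If the outer search stores users under a different field name, rename user before fields/format.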
Is there a way in Splunk to have an indicator or symbol that shows the different entry points - something like the above, just a circle for when each waypoint is logged?
Hello. I'm having a problem and I can't for the life of me figure out what goes wrong. I am running a search like this against two lookups (both lookup files have multiple columns):

index=gateway EventIDValue=gateway-check EventStatus=success
| lookup assets_and_users.csv USER AS SourceUserName, ASSET AS EndpointDeviceName OUTPUTNEW USER, ASSET
| lookup computer_objects.csv own_asset AS EndpointDeviceName OUTPUTNEW own_asset
| where isnotnull(USER) OR isnotnull(ASSET) OR isnotnull(own_asset) AND own_asset!=EndpointDeviceName

The idea is to check for a certain number of assets and users previously seen in our environment with the assets_and_users.csv lookup, and to filter out assets that are currently managed by us with the computer_objects.csv lookup, so that I can see activity from the previously seen assets and users as well as from assets not previously seen and not managed by us.

However, the first iteration of the search looked like this:

index=vpn EventIDValue=gateway-check EventStatus=success
| lookup assets_and_users.csv USER AS SourceUserName OUTPUTNEW USER
| lookup computer_objects.csv own_asset AS EndpointDeviceName OUTPUTNEW own_asset
| where isnotnull(USER) OR isnotnull(own_asset) AND own_asset!=EndpointDeviceName

and that version gave me a couple of thousand events. However, once I added the asset part as seen in the top query, I got three events, which doesn't make sense - if anything, I should get more events than with the first iteration (the bottom query). Can someone spot where it goes wrong?
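One detail worth checking, offered as a sketch rather than a diagnosis: in where, AND binds more tightly than OR, so the unparenthesized condition is evaluated as isnotnull(USER) OR isnotnull(ASSET) OR (isnotnull(own_asset) AND own_asset!=EndpointDeviceName). If the intent is "any lookup matched, and the asset is not one of ours", explicit parentheses express that:

| where (isnotnull(USER) OR isnotnull(ASSET) OR isnotnull(own_asset)) AND own_asset!=EndpointDeviceName

Also note that a lookup with two input fields (USER AS SourceUserName, ASSET AS EndpointDeviceName) only outputs when both inputs match a single row, which by itself can shrink the result set compared to the single-input version.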
Hi, I am getting the queue-blocked warnings and errors below on the HF. I don't know how to troubleshoot and fix this blocked-queue issue. Can you help with a quick fix?
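A standard starting point, sketched under the assumption that the HF forwards its own _internal logs: metrics.log reports each pipeline queue's fill level, which usually shows where the blockage begins (a full output queue points downstream at the indexers or network, while a full parsing/typing queue points at the HF itself):

index=_internal source=*metrics.log* sourcetype=splunkd group=queue host=<your_hf>
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY name

Replace <your_hf> with the heavy forwarder's hostname; the name field distinguishes the parsing, aggregation, typing, and output queues.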
When I run a search query I see that there are some fields which are present in interesting fields but not present in the event results. How is that achieved?