All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @kzjbry1, good for you, see you next time! Let me know if I can help you further, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated.
Hi @krishna63032, where did you put the apps to deploy? It seems that you placed them in two folders. They must be located only in manager-apps and not in master-apps; the latter location is deprecated and no longer present in the latest versions. Ciao. Giuseppe
Hi @rahulkumar, adapt my hint to your requirements.

In props.conf:

[source::http:logstash]
TRANSFORMS-00 = securelog_set_default_metadata
TRANSFORMS-01 = securelog_override_raw

In transforms.conf:

[securelog_set_default_metadata]
INGEST_EVAL = host := json_extract(_raw, "host.name")

[securelog_override_raw]
INGEST_EVAL = _raw := json_extract(_raw, "message")

Ciao. Giuseppe
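To illustrate what those two INGEST_EVAL rules do, assume a hypothetical incoming HEC payload from Logstash shaped like this (the event content and hostname here are made-up examples, not from the original question):

```json
{"host": {"name": "web01.example.com"}, "message": "user login succeeded"}
```

The first transform extracts host.name from the JSON and assigns it to the host metadata field (web01.example.com), and the second replaces _raw with the inner message string, so only the original log line ("user login succeeded") gets indexed instead of the full JSON envelope.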
When I push the configuration bundle through the cluster master, I get the error below. Please advise.
Hi @onthakur, you can use something like this:

<your_search>
| stats dc(service) AS service_count values(service) AS service values(url) AS url BY transaction_id
| eval status1=if(service_count=2 OR (service_count=1 AND service="service1"),"yes","not"),
       status2=if(service_count=2 OR (service_count=1 AND service="service2"),"yes","not")
| table transaction_id url status1 status2

Ciao. Giuseppe
Hi @arunkuriakose, to migrate standalone SHs to a cluster you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/Migratefromstandalonesearchheads My special hint is to pay close attention to ES, because it requires a special installation on an SH Cluster: install and configure the Deployer, take all the apps from the SHs and put them on the Deployer, install ES on the Deployer, configure the SHs as a cluster, and deploy the apps from the Deployer. The best approach is to have made all your ES configurations in a dedicated custom app, not in the ES apps, so you can install ES from scratch on the Deployer and then deploy all the customization contained in the custom app. Ciao. Giuseppe
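As a rough sketch of the conversion steps above (hostnames, ports, labels, and credentials are placeholders, so adapt them to your environment; the linked docs remain the authoritative procedure), the member initialization, captain bootstrap, and bundle push look something like:

```
# On each search head member (placeholder values):
splunk init shcluster-config -mgmt_uri https://sh1.example.com:8089 \
  -replication_port 9200 \
  -conf_deploy_fetch_url https://deployer.example.com:8089 \
  -secret <pass4SymmKey> -shcluster_label shcluster1
splunk restart

# From any one member, bootstrap the captain:
splunk bootstrap shcluster-captain \
  -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089"

# On the Deployer, after placing apps under $SPLUNK_HOME/etc/shcluster/apps:
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:<password>
```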
Currently I am using the same deployer for two different search head clusters and would like to remove it from one of the clusters. However, I cannot find any official documentation related to this. Could anyone tell me how to do it? Thank you so much.
I think I understand the essence of the challenge. A data analytics solution depends entirely on data characteristics. Can you describe the data further? For example, do the alternative field names appear in the two different sources? In other words, is there a relationship like this?

index=email source=/var/logs/esa_0.log: sender, recipient, subject, ...
index=cyber source=/varlogs/fe01.log: suser, duser, msg, ...

Such a relationship can improve the search by avoiding too many ORs, which usually decrease efficiency. On the other hand, even if such relationships exist, if suser, duser, subject, ... do not always exist in the same event, your search will not satisfy all filters. As @PickleRick says, in that case you will have to sacrifice efficiency and fetch all events, then filter. However, you have already clarified that, except for attachments, sender, recipient, subject, etc. always exist, and so do suser, duser, msg, and so on. This means you can take advantage of those always-present fields.

Now, to the bottom of the challenge. Yes, you can do that. But you need to change the token strategy a little. For this, we will single out the token for attachments from the rest. Just to distinguish this token, I call it attachments_tok, and set up Name-Value pairs (Label-Value in Dashboard Studio parlance) like these:

Name        Value
Any         *
filename1   attachments = filename1
filename2   attachments = filename2
...

Once attachments_tok is set up, reorganize the search like this:

(index=email source=/var/logs/esa_0.log ($attachments_tok$) sha256=$hash$ sender="$sender$" recipient="$recipient$" subject="$subject$" message-id="$email_id$" from-header="$reply_add$") OR (index=cyber source=/varlogs/fe01.log suser="$sender$" duser="$recipient$" msg="$subject$" id="'<$email_id$>'" ReplyAddress="$reply_add$")

Hope this helps.
Hello, I am currently trying to use a single deployer across two different search head clusters but am having trouble finding detailed steps on how to do this. I have used the same cluster label and secret for both clusters. To differentiate the clusters, I attempted to assign different captains as follows:

For Cluster A:
bootstrap shcluster-captain -servers_list "https://cluster_A_IP:8089, https://cluster_A_IP:8089, https://cluster_A_IP:8089"

For Cluster B:
bootstrap shcluster-captain -servers_list "https://cluster_B_IP:8089, https://cluster_B_IP:8089, https://cluster_B_IP:8089"

I am unsure if this setup correctly separates the two clusters while using the same deployer. Could you provide guidance on whether this approach is effective, or suggest an alternative method? Thank you so much.
Hi Team, we have a deployment with 3 standalone search heads. One of them has ES running on it. We are planning to introduce a new server as a deployer and make these 3 search heads clustered.

Question:
1. Is it possible to add these existing search heads to a cluster, or should we copy all configs, create new search heads, and copy the configs to all of them? If this is the only possibility, what are the recommendations and challenges? Can we take a backup of the full /etc/apps, then deploy new search heads -> add them to the cluster -> replicate /etc/apps. Is this the right approach?

Any heads up will be appreciated.
We tried the confs below.

Navigate to $SPLUNK_HOME/etc/system/local/ and edit (or create) server.conf:

[general]
http_proxy = http://myinternetserver01.mydomain.com:4443
https_proxy = https://myinternetserver01.mydomain.com:4443
proxy_user = username
proxy_password = mysecurepassword

We also tried the conf below:

[general]
http_proxy = http://username:mysecurepassword@myinternetserver01.mydomain.com:4443
https_proxy = https://username:mysecurepassword@myinternetserver01.mydomain.com:4443

But neither is working.
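One thing worth checking (a suggestion, not something from the post above): depending on the Splunk Enterprise version, splunkd may read its proxy settings from a [proxyConfig] stanza in server.conf rather than from [general]. A hedged sketch, reusing the same placeholder host and credentials, followed by a splunkd restart:

```
[proxyConfig]
http_proxy = http://username:mysecurepassword@myinternetserver01.mydomain.com:4443
https_proxy = https://username:mysecurepassword@myinternetserver01.mydomain.com:4443
no_proxy = localhost, 127.0.0.1
```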
I have tried this in the following way:

index="index1"
| search "slot"
| rex field=msg "(?<action>added|removed)"
| eval added_time=if(action="added",strftime(_time, "%H:%M:%S"),null())
| eval removed_time=if(action="removed",strftime(_time, "%H:%M:%S"),null())
| sort 0 _time
| streamstats max(added_time) as added_time latest(removed_time) as removed_time by host slot
| eval downtime=if(isnotnull(added_time) AND isnotnull(removed_time), strptime(removed_time, "%H:%M:%S") - strptime(added_time, "%H:%M:%S"), 0)

But the issue is that downtime is not getting calculated; it always prints 0. Need help in fixing this.
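Not a definitive fix, but one rework of the search above worth trying: keep the raw epoch value of _time instead of round-tripping through strftime/strptime, so the final subtraction operates on numbers directly (the index, field names, and the removed-minus-added logic are carried over from the original search):

```
index="index1" "slot"
| rex field=msg "(?<action>added|removed)"
| eval added_epoch=if(action="added", _time, null())
| eval removed_epoch=if(action="removed", _time, null())
| sort 0 _time
| streamstats latest(added_epoch) as added_epoch latest(removed_epoch) as removed_epoch by host slot
| eval downtime=if(isnotnull(added_epoch) AND isnotnull(removed_epoch), removed_epoch - added_epoch, 0)
```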
Thanks @ITWhisperer for the reply. The downtime field is not getting populated at all. I tried converting it to epoch time and it's still the same. Can you please look into it once more?
Did anyone ever come up with an answer to this question? In our installation, the app names show up in the "Apps" pull-down list. Some of those apps have custom icons, which show up in both the "Apps" pull-down list and the app navigation bar. But under no circumstances does an app name ever appear in the app navigation bar. Custom icons do, for those apps which have one. For everything else, we get the default green and white "App" icon on the far right side of the app navigation bar. But no app name text, ever. Something is broken in our environment that's preventing app names from showing in the app navigation bar. It's been that way for years, and I'd really like to know how to fix it.
Are you saying you've installed Splunk Enterprise AND Universal Forwarder on the same host? If you run netstat -an | grep 8000, is there a process in LISTEN state on that port? Are you running https or http?
Thanks for your help. Splunk is running, but it's not communicating with 127.0.0.1:8000 (I ran a check to make that determination). As for the forwarder, both the Splunk service and the Splunk forwarder are running. Maybe uninstall it? Should I mention I have Splunk on Ubuntu installed in my VM? Thanks.
The switch to IP helped resolve my migration issues, Thanks!
So far, you have only mentioned 2 dimensions, cost and time - what else are you breaking your statistics down by?
If you want / need help, all discussions will be public here in Answers. Here is @yuanliu's excellent description of what you need to describe about your issue, and how, so we can help you after that. Without basic information it's really hard and frustrating to make guesses to solve your problem.

——8<______

Let me repeat the four commandments of asking answerable questions in this forum:
1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search (whose SPL volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output, without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.
Hi, thanks for your response; sadly I am still not able to achieve this. Can we connect over Gmail or any other platform where I can describe the scenario to you and get it done? I don't understand what I need to replace the values below with:

<search filters for website status=ok>
<search for website status = NOT OK>

My index name is main and the sourcetype is "web_ping".