All Posts


Hi! I am facing a very similar issue: after adding a new field to my KV store, the automatic lookup stops working and never returns my new field in my events. I can still retrieve the field manually with this query:

| inputlookup my_kvstore

but this one:

index=my_index | lookup my_kvstore...

throws an error:

[comma-separated list of my indexers] phase_0 - Streamed search execute failed because: Error in 'lookup' command: Cannot find the destination field 'my_new_field' in the lookup table 'my_kvstore'.

And yet, with this query:

index=my_index | lookup local=true my_kvstore...

I can retrieve my new field.

Regards,
| stats values(Skills) as Skills count(eval(TrainingCompleted=="Yes")) as TrainingCompleted count as "Team Appearance" by TeamName EmploymentType | eval Skills=mvjoin(Skills,",") | eval "TrainingCompleted%"=round(100*TrainingCompleted/'Team Appearance',0)."%"
I have a source file with comma-separated fields, and I have to create a table that combines and shows statistics for different products.

Data file fields: TeamName,EmploymentType,Skills,TrainingCompleted

Source file data:
TeamA,Contract,Java,Yes
TeamA,Contract,DotNet,No
TeamA,Contract,C++,Yes
TeamA,Contract,ReactJS,No
TeamB,Permanent,Java,Yes
TeamB,Permanent,DotNet,No
TeamB,Permanent,C++,Yes
TeamB,Permanent,ReactJS,No
TeamC,Contract,Java,Yes
TeamC,Contract,DotNet,No
TeamC,Contract,C++,Yes
TeamC,Contract,ReactJS,No
TeamD,Permanent,Java,Yes
TeamD,Permanent,DotNet,No
TeamD,Permanent,C,Yes
TeamD,Permanent,ReactJS,No
TeamE,Contract,Java,Yes
TeamE,Contract,DotNet,No
TeamE,Contract,Java,Yes

Now the requirement is to create a table view of the source file with the columns below:

TeamName  EmploymentType  Skills                   TrainingCompleted  Team Appearance  Training Completion%
TeamA     Contract        Java,DotNet,ReactJS,C++  2                  4                50%
TeamB     Permanent       Java,DotNet,ReactJS,C++  2                  4                50%
TeamC     Contract        Java,DotNet,ReactJS,C++  2                  4                50%
TeamD     Permanent       Java,DotNet,ReactJS,C    2                  4                50%
TeamE     Contract        Java,DotNet              2                  3                67%

Please give me the exact query. I am a beginner in Splunk.
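For anyone who wants to sanity-check the expected numbers outside Splunk, here is a minimal Python sketch of the same aggregation over a subset of the sample rows (distinct skills per team, a count of "Yes" rows, a total row count, and a rounded completion percentage). It is only an illustration of the arithmetic, not the SPL answer:

```python
from collections import defaultdict

# Subset of the sample rows: (TeamName, EmploymentType, Skills, TrainingCompleted)
rows = [
    ("TeamA", "Contract", "Java", "Yes"), ("TeamA", "Contract", "DotNet", "No"),
    ("TeamA", "Contract", "C++", "Yes"), ("TeamA", "Contract", "ReactJS", "No"),
    ("TeamE", "Contract", "Java", "Yes"), ("TeamE", "Contract", "DotNet", "No"),
    ("TeamE", "Contract", "Java", "Yes"),
]

stats = defaultdict(lambda: {"skills": [], "yes": 0, "count": 0})
for team, emp, skill, trained in rows:
    g = stats[(team, emp)]
    if skill not in g["skills"]:      # like stats values(Skills): distinct skills
        g["skills"].append(skill)
    g["yes"] += trained == "Yes"      # like count(eval(TrainingCompleted=="Yes"))
    g["count"] += 1                   # like count, i.e. "Team Appearance"

for (team, emp), g in stats.items():
    pct = round(100 * g["yes"] / g["count"])  # Training Completion%
    print(team, emp, ",".join(g["skills"]), g["yes"], g["count"], f"{pct}%")
```

Note how TeamE comes out as 2 completions over 3 appearances, which rounds to the 67% shown in the expected table.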
The match function expects a regex string as the second parameter. In regex, "*" is a quantifier meaning "zero or more of the preceding item". If "*" appears at the beginning of the regex expression, there is no preceding item to repeat, hence the error.
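The rule is easy to demonstrate outside Splunk; Python's re module applies the same leading-quantifier restriction and rejects such a pattern at compile time:

```python
import re

# A leading "*" has no preceding token to repeat, so compilation fails.
try:
    re.compile("*_ip")
    rejected = False
except re.error:
    rejected = True
print(rejected)  # True

# Prefixing "." gives the quantifier something to repeat:
# ".*_ip" means "anything ending in _ip".
valid = re.compile(".*_ip")
print(bool(valid.search("client_ip")))  # True
```

Applied to the where clause in the question, the equivalent fix would be quoting the pattern and writing ".*_ip" / ".*_host" instead of a bare *_ip.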
Hi Team,

I am getting this error message frequently in Splunk's internal logs:

Error in 'where' command: The expression is malformed. An unexpected character is reached at '*) OR match(indicator, *_ip) OR match(indicator, *_host))'

Any hints will be appreciated. Thanks in advance.
Hi all, I found that the information about the Replication Factor in my Monitoring Console (Splunk version 9.1.1) is wrong. Has anyone experienced the same thing? [Monitoring Console > Overview of Splunk Enterprise 9.1.1] displays Replication Factor = 3, but I configured Replication Factor = 2 (it's a multisite cluster, so origin=1, total=2). Is it maybe the Search Head Cluster Replication Factor (which is 3), or simply a display issue? Thank you for your advice. Ciao. Giuseppe
Hi Team, We are planning to integrate Splunk with the ticketing system BMC Helix, so we would like to know if it's possible to raise a ticket in BMC Helix automatically when knowledge objects are triggered. If yes, what is the way to do that? Thanks in advance, Siva.
I have a log feed which was configured by a previous employee. Documentation does not exist, of course... The feed stopped once we migrated indexers. I checked the deployment server and there do not seem to be any apps on it which ingest this feed. Then we added the old indexer back into the architecture and the feed started working again!

When I check these events in Search & Reporting, I can see the feed is only coming in via this legacy indexer (by checking the splunk_server field). I logged into the legacy indexer via the CLI and used btool on inputs for the index, source, and sourcetype: no matches. I am also struggling to find anything useful in the _internal events via the search head GUI.

Both indexers are in a cluster, so the config should be identical, but the events only come in via the legacy indexer. How can I find how this feed is configured?
hi @sekhar463, please try this:

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections [ search index="INDEX1" "\" (puppet-agent OR puppet) AND *Error* AND "/Stage[" | rename host AS hostname | fields hostname ] | table hostname sourceIp | dedup hostname

there was a misplaced parenthesis. Ciao. Giuseppe
Hi @vijreddy30, let me understand: do you want to mask some events, or part of an event, in a permanent way or in a reversible way?

If in a reversible way, you have to preprocess your data using a script and a certificate, and then index your data using Splunk.

If you only want to anonymize a part of your data, follow the instructions at https://www.splunk.com/en_us/blog/learn/data-anonymization.html or https://docs.splunk.com/Documentation/Splunk/latest/Data/Anonymizedata#Anonymize_data_through_a_sed_script

Ciao. Giuseppe
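As a rough illustration of the irreversible (sed-style) approach described above, here is a Python sketch that masks a long record identifier the way a SEDCMD-style replacement would, keeping only the last four characters visible. The sample event and the pattern are assumptions for the example, not Splunk configuration:

```python
import re

# Sample event containing a record identifier (hypothetical data).
event = "2023-10-25 10:56:46,709 WARN pool-1-thread-1 - 02000011831199QD06620"

def mask(match):
    # Keep the last four characters, x-out the rest.
    value = match.group(0)
    return "x" * (len(value) - 4) + value[-4:]

# The pattern (a long alphanumeric token ending in a digit) is an
# assumption; adjust it to the real record format.
masked = re.sub(r"\b\w{10,}\d\b", mask, event)
print(masked)
```

Remember that masking at index time is permanent: once the data is indexed this way, the original value cannot be recovered from Splunk.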
Hi @smithy001, sorry but I don't understand your question: it's usual to use faster, more expensive storage for Hot/Warm buckets and slower, less expensive storage for Cold buckets. What's your question?

You should analyze your searches to understand what retention to apply to warm buckets so that they cover more than 85-90% of your searches; in this way, the use of slower storage will not affect your performance too much. Usually one month is used, but it depends on your usage.

Ciao. Giuseppe
I am trying to integrate this solution into Splunk but I am finding problems. The most relevant so far is the number of retrieved events. I use the official event collector app (Radware CWAF Event Collector | Splunkbase). It works via an API user, and I found that the number of events in Splunk doesn't match the events in the cloud console.

After opening a support ticket with Radware, they told me that the problem is the "pulling rate" of the API configuration in my Splunk. I have been trying to find how to configure this "pulling rate" in Splunk but I found nothing. Do you know how to set this parameter, or how did you solve this integration?

This is exactly what they told me: Our cloudops team checked to see if there are any differences between the CWAF and the logs that are sent to your SIEM. They found that there is a queue of logs that are waiting to be pulled by your SIEM. Therefore, we do not have any evidence that the issue is with the SIEM infrastructure. For example, if we send 10 events per minute and the SIEM is pulling 5 per minute, this will create a queue of logs. Unfortunately, we cannot support customer-side configuration. It might be more helpful to consult with the support team for the SIEM you are using, as the interval might be the "Pulling Rate."
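Radware's example can be made concrete with a few lines of arithmetic: if events are produced faster than the collector polls them, the backlog grows linearly. The rates below are the illustrative figures from their reply, not measured values:

```python
# Illustrative rates from the vendor's example: 10 events/min produced,
# 5 events/min pulled by the collector.
produced_per_min = 10
pulled_per_min = 5

backlog = 0
for minute in range(60):  # simulate one hour
    backlog += produced_per_min - pulled_per_min

print(backlog)  # 300 events queued after an hour
```

Any sustained gap like this shows up as an ever-growing queue on the vendor side, which matches what Radware's cloudops team observed; closing it would mean increasing how much or how often the collector app pulls, if the app exposes such a setting.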
If we configure a fast volume for hot/warm and slower spindles for cold, and set maxVolumeDataSizeMB to enforce sizes, can you see any situation where cold would fill but hot/warm would still have space?
OK. This is definitely not what XML Windows events look like.
"rf/sf per site being 1 and the total rf/sf being 2" What do you mean by that? origin:1,total:2? Or site1:1,site2:1,total:2? Or something else?
I have a client creating a new system that will have 2 sites, with rf/sf per site being 1 and the total rf/sf being 2. Each site will have 3+ indexers. My question is how this affects bucket distribution over the indexers if you leave maxbucket at the default of 3, and whether there are any performance implications. They're going with rf/sf of 1 due to disk costs. Thanks in advance.
2023-10-25 10:56:46,709 WARN pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - Header Field Name: MOM_Caution_e_1, value is out of Bounds using beginIndex:608, endIndex:684 from line: 2023-10-25 10:56:46,709 WARN pool-1-thread-1 com.veeva.Kpr.batchrecordprint.scheduledTasks - 02000011831199QD06620

My requirement is to encrypt the "WARN pool-1-........................" record in the source file. Please help me with the process.
Thanks. Is it possible to show "No active VPN user" in the same timechart if the field src_sg_info does not exist?
Getting the error:

Error in 'search' command: Unable to parse the search: unbalanced parentheses

using the below search:

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections [ search index="INDEX1" \ (puppet-agent OR puppet)) AND *Error* AND "/Stage[" | rename host AS hostname | fields hostname ] | table hostname sourceIp | dedup hostname
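When an SPL string gets long, a quick offline scan can locate the offending parenthesis before Splunk rejects the search. This little Python helper (a sketch that deliberately ignores quoting and escapes) returns the index of the first unmatched ")":

```python
# The failing subsearch from the question (raw string, verbatim).
query = r'search index="INDEX1" \ (puppet-agent OR puppet)) AND *Error* AND "/Stage["'

def first_unmatched_close(text):
    """Return the index of the first ")" with no matching "(",
    len(text) if "(" are left open, or -1 if balanced.
    Naive sketch: does not skip quoted strings or escapes."""
    depth = 0
    for i, ch in enumerate(text):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return i
    return -1 if depth == 0 else len(text)

pos = first_unmatched_close(query)
print(query[max(pos - 5, 0):pos + 1])  # context around the offending ")"
```

Here the scan lands on the doubled ")" after puppet, which matches the parenthesis fix suggested in the reply earlier in this thread.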
Use single quotes around field names when they are on the right of an assignment:

| makeresults count=3 | streamstats count | eval a.1 = case(count=1, 1, count=2, null(), count=3, "null") | eval a.2 = case(count=1, "null", count=2, 2, count=3, null()) | eval a3 = case(count=1, null(), count=2, "null", count=3, 3) | table a* | foreach mode=multifield * [eval <<FIELD>>=if('<<FIELD>>'="null",null(),'<<FIELD>>')]