All Posts



Hi all, I found that the information about the Replication Factor in my Monitoring Console (Splunk version 9.1.1) is wrong. Has anyone experienced the same thing? [Monitoring Console > Overview of Splunk Enterprise 9.1.1] displays Replication Factor = 3, but I configured Replication Factor = 2 (it's a multisite cluster, so origin=1, total=2). Is it maybe the Search Head Cluster Replication Factor (which is 3), or simply a display error? Thank you for your advice. Ciao. Giuseppe
Hi Team, We are planning to integrate Splunk with the ticketing system BMC Helix, so we would like to know if it's possible to raise a ticket in BMC Helix automatically when knowledge objects (e.g. alerts) are triggered. If yes, what is the way to do that? Thanks in advance, Siva.
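One common pattern for this kind of integration is a custom Splunk alert action: Splunk runs a script when the alert triggers, and the script POSTs a ticket to the ticketing system's REST API. The endpoint URL, ticket field names, and auth scheme below are hypothetical placeholders, not actual BMC Helix API details — a minimal sketch:

```python
import json
import sys
import urllib.request

# Hypothetical endpoint -- BMC Helix's real REST API path will differ.
HELIX_URL = "https://helix.example.com/api/incidents"

def build_ticket(alert: dict) -> dict:
    """Map a Splunk alert payload onto a hypothetical ticket schema."""
    return {
        "summary": alert.get("search_name", "Splunk alert"),
        "description": json.dumps(alert.get("result", {})),
        "urgency": alert.get("configuration", {}).get("urgency", "3-Medium"),
    }

def send_ticket(ticket: dict, url: str = HELIX_URL, token: str = "CHANGE_ME") -> int:
    """POST the ticket as JSON; the Bearer-token auth here is an assumption."""
    req = urllib.request.Request(
        url,
        data=json.dumps(ticket).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__" and "--execute" in sys.argv:
    # Splunk invokes custom alert action scripts with --execute and,
    # with payload_format = json in alert_actions.conf, passes the
    # alert payload as JSON on stdin.
    payload = json.loads(sys.stdin.read())
    send_ticket(build_ticket(payload))
```

The script would sit in an app's bin/ directory and be registered in alert_actions.conf; any alert configured to use the action then raises a ticket automatically when it fires.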
I have a log feed which was configured by a previous employee. Documentation does not exist, of course... The feed stopped once we migrated indexers. I checked the deployment server, and there do not seem to be any apps on it which ingest this feed. Then we added the old indexer back into the architecture and the feed started working again! When I check these events in Search & Reporting, I can see the feed is only coming in via this legacy indexer (by checking the splunk_server field). I logged into the legacy indexer via CLI and ran btool against inputs for the index, source and sourcetypes: no matches. I am also struggling to find anything useful in the _internal events via the search head GUI. Both indexers are in a cluster, so the config should be identical, but the events only come in via the legacy indexer. How can I find out how this feed is configured?
hi @sekhar463, please try this:

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections
  [ search index="INDEX1" "\" (puppet-agent OR puppet) AND *Error* AND "/Stage["
  | rename host AS hostname
  | fields hostname ]
| table hostname sourceIp
| dedup hostname

There was a wrong (unbalanced) parenthesis. Ciao. Giuseppe
Hi @vijreddy30, let me understand: do you want to mask some events, or part of each event, in a permanent way or in a reversible way? If in a reversible way, you have to preprocess your data using a script and a certificate, and then index the result with Splunk. If you only want to anonymize part of your data, follow the instructions at https://www.splunk.com/en_us/blog/learn/data-anonymization.html or https://docs.splunk.com/Documentation/Splunk/latest/Data/Anonymizedata#Anonymize_data_through_a_sed_script Ciao. Giuseppe
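The sed-script approach in that second link is an index-time regex substitution (SEDCMD in props.conf). If you need to preprocess outside Splunk instead, the same idea is just a chain of regex replacements; the patterns below are illustrative assumptions, not rules from the thread:

```python
import re

# Example masking rules (regexes are illustrative assumptions):
# mask card-number-like digit runs and password values before indexing.
RULES = [
    (re.compile(r"\b\d{13,16}\b"), "####MASKED####"),
    (re.compile(r"(password=)\S+"), r"\1********"),
]

def anonymize(line: str) -> str:
    """Apply each substitution rule, like a chain of SEDCMD s/// rules."""
    for pattern, replacement in RULES:
        line = pattern.sub(replacement, line)
    return line
```

The index-time equivalent of the first rule would be something like SEDCMD-mask = s/\d{13,16}/####MASKED####/g in props.conf. Either way the masking is permanent, which is why a reversible scheme needs the script-plus-certificate preprocessing mentioned above.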
Hi @smithy001, sorry but I don't understand your question: it's usual to use faster, more expensive storage for Hot/Warm buckets and slower, less expensive storage for Cold buckets. What's your question? You should analyze your searches to understand what retention to apply to warm buckets so that they cover more than 85-90% of searches; this way, using slower storage will not affect your performance much. Usually one month is used, but it depends on your usage. Ciao. Giuseppe
I am trying to integrate this solution into Splunk but I am finding problems. The most relevant so far is the number of retrieved events. I use the official event collector app (Radware CWAF Event Collector on Splunkbase). It works via an API user, and I found that the number of events in Splunk doesn't match the events in the cloud console. After opening a support ticket with Radware, they told me that the problem is the "pulling rate" of the API configuration in my Splunk. I have been trying to find how to configure this "pulling rate" in Splunk but I found nothing. Do you know how to set this parameter, or how you solved this integration?

This is exactly what they told me: Our cloudops team checked to see if there are any differences between the CWAF and the logs that are sent to your SIEM. They found that there is a queue of logs that are waiting to be pulled by your SIEM. Therefore, we do not have any evidence that the issue is with the SIEM infrastructure. For example, if we send 10 events per minute and the SIEM is pulling 5 per minute, this will create a queue of logs. Unfortunately, we cannot support customer-side configuration. It might be more helpful to consult with the support team for the SIEM you are using, as the interval might be the "Pulling Rate."
If we configure a fast volume for hot/warm and slower spindles for cold, and set maxVolumeDataSizeMB to enforce sizes, can you see any situation where cold would fill but hot/warm would still have space?
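For context, the setup described can be sketched in indexes.conf roughly like this (paths and sizes are placeholders, not from the thread). As I understand the volume mechanics, each volume enforces its own maxVolumeDataSizeMB independently, so yes: if the cold volume reaches its cap, Splunk freezes the oldest cold buckets even while the hot/warm volume still has free space, and the reverse can also happen.

```
# indexes.conf sketch -- paths and sizes are placeholders
[volume:fast]
path = /fast_storage/splunk
maxVolumeDataSizeMB = 500000

[volume:slow]
path = /slow_storage/splunk
maxVolumeDataSizeMB = 2000000

[main]
homePath   = volume:fast/main/db
coldPath   = volume:slow/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb   # thawedPath cannot reference a volume
```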
OK. This is definitely not what Windows XML events look like.
"rf/sf per site being 1 and the total rf/sf being 2" What do you mean by that? origin:1,total:2? Or site1:1,site2:1,total:2? Or something else?
I have a client creating a new system that will have 2 sites, with rf/sf per site being 1 and the total rf/sf being 2. Each site will have 3+ indexers. My question is how this affects bucket distribution over the indexers if you leave maxbucket at the default of 3, and whether there are any performance implications. They're going with rf/sf of 1 due to disk costs. Thanks in advance
2023-10-25 10:56:46,709 WARN pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - Header Field Name: MOM_Caution_e_1, value is out of Bounds using beginIndex:608, endIndex:684 from line: 2023-10-25 10:56:46,709 WARN pool-1-thread-1 com.veeva.Kpr.batchrecordprint.scheduledTasks - 02000011831199QD06620

My requirement is to encrypt the "WARN pool-1-........................" record in the source file. Please help me with the process.
Thanks. Is it possible to show "No active VPN user" in the same timechart when the field src_sg_info does not exist?
I am getting the error "Error in 'search' command: Unable to parse the search: unbalanced parentheses" using the below search:

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections [ search index="INDEX1" \ (puppet-agent OR puppet)) AND *Error* AND "/Stage[" | rename host AS hostname | fields hostname ] | table hostname sourceIp | dedup hostname
Use single quotes around field names when they appear on the right-hand side of an assignment:

| makeresults count=3
| streamstats count
| eval a.1 = case(count=1, 1, count=2, null(), count=3, "null")
| eval a.2 = case(count=1, "null", count=2, 2, count=3, null())
| eval a3 = case(count=1, null(), count=2, "null", count=3, 3)
| table a*
| foreach mode=multifield * [eval <<FIELD>>=if('<<FIELD>>'="null",null(),'<<FIELD>>')]
I am basically faced with this problem:

| makeresults count=3
| streamstats count
| eval a.1 = case(count=1, 1, count=2, null(), count=3, "null")
| eval a.2 = case(count=1, "null", count=2, 2, count=3, null())
| eval a3 = case(count=1, null(), count=2, "null", count=3, 3)
| table a*
| foreach mode=multifield * [eval <<FIELD>>=if(<<FIELD>>="null",null(),'<<FIELD>>')]

I have fields that contain a `.`. This screws up the `foreach` command. Is there a way to work around this? I have tried using `"<<FIELD>>"` but to no avail. I think it would work to rename all "bad" names, loop through them, and rename them back, but if possible I would like to avoid doing this.
  index=asa host=1.2.3.4 | fillnull src_sg_info value="No active VPN user" | timechart span=10m dc(src_sg_info) by src_sg_info | rename user1 as "David E"  
Please, how can I change the service logon account from the default NT AUTHORITY account to the Local System account? I am trying to do this with Ansible.
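One way to sketch this with Ansible, assuming the ansible.windows collection is installed (the service name here is a placeholder):

```yaml
- name: Set service logon to Local System
  hosts: windows
  tasks:
    - name: Change the service logon account to LocalSystem
      ansible.windows.win_service:
        name: SplunkForwarder        # placeholder service name
        username: LocalSystem        # built-in account, no password needed
        state: restarted             # restart so the new logon takes effect
```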
Hi, not sure how to fix this; hope someone can give me a hint. The code looks like:

index=asa host=1.2.3.4 src_sg_info=* | timechart span=10m dc(src_sg_info) by src_sg_info | rename user1 as "David E"

This Splunk search gives a list of active/logged-on VPN users. So far so good. My question is the following: how to include empty src_sg_info in the same timechart and mark it as "No active VPN user"?
I have a simple report that has Hebrew in it. Exporting to CSV works as it should and I see the Hebrew in it, but exporting to PDF shows nothing.

| makeresults | eval title = "לא רואים אותי"

Can someone help? Thanks