All Topics

I have a source file with comma-separated fields, and I have to create a table that combines and shows statistics for the different teams.

Data file fields: TeamName,EmploymentType,Skills,TrainingCompleted

Source file data:
TeamA,Contract,Java,Yes
TeamA,Contract,DotNet,No
TeamA,Contract,C++,Yes
TeamA,Contract,ReactJS,No
TeamB,Permanent,Java,Yes
TeamB,Permanent,DotNet,No
TeamB,Permanent,C++,Yes
TeamB,Permanent,ReactJS,No
TeamC,Contract,Java,Yes
TeamC,Contract,DotNet,No
TeamC,Contract,C++,Yes
TeamC,Contract,ReactJS,No
TeamD,Permanent,Java,Yes
TeamD,Permanent,DotNet,No
TeamD,Permanent,C,Yes
TeamD,Permanent,ReactJS,No
TeamE,Contract,Java,Yes
TeamE,Contract,DotNet,No
TeamE,Contract,Java,Yes

Now the requirement is to create a table view of the source file with the columns below:

TeamName  EmploymentType  Skills                   TrainingCompleted  Team Appearance  Training Completion%
TeamA     Contract        Java,DotNet,ReactJS,C++  2                  4                50%
TeamB     Permanent       Java,DotNet,ReactJS,C++  2                  4                50%
TeamC     Contract        Java,DotNet,ReactJS,C++  2                  4                50%
TeamD     Permanent       Java,DotNet,ReactJS,C    2                  4                50%
TeamE     Contract        Java,DotNet              2                  3                67%

Please give me the exact query. I am a beginner in Splunk.
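
A minimal sketch of one way to build this, assuming the file has already been ingested with those four fields extracted (the index and sourcetype names here are placeholders):

index=main sourcetype=team_skills
| stats values(Skills) as Skills, count(eval(TrainingCompleted="Yes")) as TrainingCompleted, count as "Team Appearance" by TeamName, EmploymentType
| eval Skills=mvjoin(Skills, ",")
| eval 'Training Completion%'=round(TrainingCompleted*100/'Team Appearance', 0)."%"
| table TeamName, EmploymentType, Skills, TrainingCompleted, "Team Appearance", "Training Completion%"

Here count gives the number of rows per team ("Team Appearance"), count(eval(...)) counts only the "Yes" rows, and values() deduplicates the skills list (which is why TeamE's duplicate Java appears once); note values() returns the skills in lexicographic order, not file order.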
Hi Team, we are frequently getting this error message in Splunk's internal logs:

Error in 'where' command: The expression is malformed. An unexpected character is reached at '*) OR match(indicator, *_ip) OR match(indicator, *_host))'

Any hints will be appreciated. Thanks in advance.
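
For anyone with the same symptom: match() expects a regular expression passed as a quoted string, and a bare * is not a valid regex, so wildcards like *_ip inside where/match produce exactly this parse error. A sketch of what the offending clause probably needs to look like, using the field name from the error text (the intended patterns are an assumption):

| where match(indicator, "_ip$") OR match(indicator, "_host$")

Search-style wildcards such as indicator=*_ip only work in the base search, not inside the where command.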
Hi all, I found that the information in my Monitoring Console (Splunk version 9.1.1) about the Replication Factor is wrong. Has anyone experienced the same thing? In [Monitoring Console > Overview of Splunk Enterprise 9.1.1] the Replication Factor is displayed as 3, but I configured Replication Factor = 2 (it's a multisite cluster, so origin=1, total=2). Is it maybe the Search Head Cluster Replication Factor (which is 3), or simply a display issue? Thank you for your advice. Ciao. Giuseppe
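
For comparison, a minimal sketch of the multisite settings described above, as they would appear in server.conf on the cluster manager (the site list is illustrative):

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2

The panel may indeed be picking up the search head cluster's replication_factor from the separate [shclustering] stanza, which would explain the value 3.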
Hi Team, we are planning to integrate Splunk with the ticketing system BMC Helix. We would like to know if it's possible to raise a ticket in BMC Helix automatically when a knowledge object (such as an alert) is triggered. If yes, what is the way to do that? Thanks in advance, Siva.
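
One generic approach, if BMC Helix exposes a REST endpoint for creating incidents, is Splunk's built-in webhook alert action. A sketch in savedsearches.conf (the search, schedule, and URL are placeholders, and any Helix-specific authentication would still need a custom alert action or a dedicated add-on):

[Raise Helix ticket]
search = index=main sourcetype=my_app log_level=ERROR
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
enableSched = 1
alert.track = 1
action.webhook = 1
action.webhook.param.url = https://helix.example.com/api/incidents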
I have a log feed which was configured by a previous employee. Documentation does not exist, of course... The feed stopped once we migrated indexers. I checked the deployment server, and there do not seem to be any apps on it which ingest this feed. Then we added the old indexer back into the architecture and the feed started working again!

When I check these events in Search & Reporting, I can see the feed is only coming in via this legacy indexer (by checking the splunk_server field). I logged into the legacy indexer via CLI and ran btool against inputs for the index, source, and sourcetypes: no matches. I am also struggling to find anything useful in the _internal events via the search head GUI. Both indexers are in a cluster, so the config should be identical, yet the events only come in via the legacy indexer. How can I find how this feed is configured?
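
One way to see what the legacy indexer itself thinks it is ingesting is the per-source throughput metrics in _internal (the host value is a placeholder):

index=_internal source=*metrics.log group=per_source_thruput host=legacy-indexer-01
| stats sum(kb) as kb by series
| sort - kb

If the feed's source shows up in series but btool finds no matching monitor stanza, the data may be arriving over the network instead, e.g. via a splunktcp or HEC input, which is worth checking with btool on those input types as well.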
I am trying to integrate this solution into Splunk, but I am running into problems. The most relevant one so far is the number of retrieved events. I use the official event collector app, Radware CWAF Event Collector | Splunkbase. It works via an API user, and I found that the number of events in Splunk doesn't match the events in the cloud console.

After opening a support ticket with Radware, they told me that the problem is the "pulling rate" of the API configuration in my Splunk. I have been trying to find how to configure this "pulling rate" in Splunk but found nothing. Do you know how to configure this parameter, or how did you solve this integration? This is exactly what they told me:

Our cloudops team checked to see if there are any differences between the CWAF and the logs that are sent to your SIEM. They found that there is a queue of logs that are waiting to be pulled by your SIEM. Therefore, we do not have any evidence that the issue is with the SIEM infrastructure. For example, if we send 10 events per minute and the SIEM is pulling 5 per minute, this will create a queue of logs. Unfortunately, we cannot support customer-side configuration. It might be more helpful to consult with the support team for the SIEM you are using, as the interval might be the "Pulling Rate."
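
In case it helps: for scheduled modular inputs, the closest thing to a "pulling rate" is usually the input's interval in inputs.conf, plus whatever per-poll batch size the app itself exposes. A sketch of what that could look like in the add-on's local/inputs.conf — the stanza name here is an assumption, so copy the one the app actually created:

[radware_cwaf_event_collector://my_input]
interval = 60

Shortening the interval makes Splunk poll the API more often; whether this app also supports a batch-size parameter is something its documentation or default/inputs.conf would confirm.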
If we configure a fast volume for hot/warm and slower spindles for cold, and set maxVolumeDataSizeMB to enforce sizes, can you see any situation where cold would fill but hot/warm would still have space?
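
For context, a minimal sketch of the layout being described in indexes.conf (paths and sizes are illustrative):

[volume:fast]
path = /fast_storage
maxVolumeDataSizeMB = 500000

[volume:slow]
path = /slow_storage
maxVolumeDataSizeMB = 2000000

[my_index]
homePath = volume:fast/my_index/db
coldPath = volume:slow/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb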
I have a client creating a new system that will have 2 sites, with rf/sf per site being 1 and the total rf/sf being 2. Each site will have 3+ indexers. My question is how this affects bucket distribution over the indexers if you leave maxbucket at the default of 3, and whether there are any performance implications. They're going with rf/sf of 1 due to disk costs. Thanks in advance.
I am basically faced with this problem:

| makeresults count=3
| streamstats count
| eval a.1 = case(count=1, 1, count=2, null(), count=3, "null")
| eval a.2 = case(count=1, "null", count=2, 2, count=3, null())
| eval a3 = case(count=1, null(), count=2, "null", count=3, 3)
| table a*
| foreach mode=multifield * [eval <<FIELD>>=if(<<FIELD>>="null",null(),'<<FIELD>>')]

I have fields that contain a `.`. This breaks the `foreach` command. Is there a way to work around this? I have tried using `"<<FIELD>>"` but to no avail. I think it would work to rename all "bad" names, loop through them, and rename them back, but if possible I would like to avoid doing this.
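
If no quoting variant works on your version, the rename round-trip you mention can at least be done with wildcards rather than per-field. A sketch, assuming the dotted fields all share the a.* pattern:

| rename a.* as a_*
| foreach mode=multifield a_* [eval <<FIELD>>=if(<<FIELD>>="null",null(),'<<FIELD>>')]
| rename a_* as a.*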
Hi, I am not sure how to fix this; I hope someone can give me a hint. The code looks like:

index=asa host=1.2.3.4 src_sg_info=*
| timechart span=10m dc(src_sg_info) by src_sg_info
| rename user1 as "David E"

This Splunk code gives a list of active/logged-on VPN users. So far so good. My question is the following: how do I include empty src_sg_info in the same timechart and mark it as "No active VPN user"?
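
One sketch of a way to do this: drop the src_sg_info=* filter (which excludes exactly the events you want to see) and substitute a label before charting, assuming events without an active user simply lack the field:

index=asa host=1.2.3.4
| eval src_sg_info=coalesce(src_sg_info, "No active VPN user")
| timechart span=10m count by src_sg_info
| rename user1 as "David E"

Note that I swapped dc(src_sg_info) for count, since the distinct count of the split-by field is always 1 per series; adjust if the original statistic was intentional.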
I have a simple report that has Hebrew in it. Exporting to CSV works as it should and I see the Hebrew, but exporting to PDF shows nothing.

| makeresults
| eval title = "לא רואים אותי"

Can someone help? Thanks.
Hi, I'm trying Splunk SOAR Community Edition, and I'm having an issue with the Elasticsearch app. I'm attempting to configure the asset with my Elasticsearch instance. The test connectivity is good, but I can't poll incidents with "poll now". I encounter this type of error:

Starting ingestion... If an ingestion is already in progress, this request will be queued and completed after that request completes.
App 'Elasticsearch' started successfully (id: 1699519715123) on asset: 'elastic'(id: 4)
Loaded action execution configuration
Quering data for soar index
Successfully added containers: 0, Successfully added artifacts: 0
1 action failed
Unable to load query json. Error: Error Message: Expecting value: line 1 column 1 (char 0)

However, when I use an action in a playbook with the "run query" command, I can see data. Has anyone ever encountered this error?
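
For what it's worth, "Expecting value: line 1 column 1 (char 0)" is the standard Python json.loads error for an empty or non-JSON string, so it points at the on-poll query configured on the asset being empty or malformed rather than at the connection. A minimal query body of the kind such an ingest-query field typically expects (the exact parameter name in the app is an assumption):

{"query": {"match_all": {}}}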
I have a distributed Splunk environment and my dashboard is linked to Services. I made a few changes to a KPI base search and KPI title, but they are not reflected in the dashboard. Please suggest what needs to be done.
I noticed that dashboards in Splunk 9.1.0 open in a new tab instead of the same tab. This wasn't the case in previous versions of Splunk. Does anyone know why this change was added and how to make dashboards open in the same tab using conf file changes? Any help is much appreciated. Thanks.
Is there any way to disable the Dashboard Studio and classic dashboard help cards under the Dashboards tab through conf file changes?
Please help comment on the issue below.

Bug description: the limit option is not processed correctly for phantom.collect2 in Phantom version 6.1.0.

Reproduced in the lab:

testb = phantom.collect2(container=container, tags=["test"], datapath=['artifact:*.name'], limit=0)
phantom.debug(len(testb))

There are more than 6000 artifacts in the test container. However, phantom.collect2 only returns 1999 results, even though we set limit=0, which should mean no limit:

Nov 09, 11:19:01 : phantom.collect2(): called for datapath['artifact:*.name'], scope: None and filter_artifacts: None
Nov 09, 11:19:01 : phantom.get_artifacts() called for label: *
Nov 09, 11:19:01 : phantom.collect(): called with datapath: artifact:* / <class 'str'>, limit = 2000, scope=all, filter_artifact_ids=[] and none_if_first=False with trace:False
Nov 09, 11:19:01 : phantom.collect(): calling out to collect_from_container
Nov 09, 11:19:01 : phantom.collect(): called with datapath 'artifact:*', scope='all' and limit=2000. Found 2000 TOTAL artifacts
Nov 09, 11:19:01 : phantom.collect2(): Classified datapaths as [<DatapathClassification.ARTIFACT: 1>]
Nov 09, 11:19:01 : phantom.collect(): called with datapath as LIST of paths, scope='all' and limit=0. Found 1999 TOTAL artifacts
Nov 09, 11:19:01 : 1999
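
The debug trace shows limit=0 being translated into an internal limit = 2000, so one untested workaround may be to pass a large explicit limit instead of 0 and check whether it is honored:

# hypothetical workaround: explicit large limit instead of limit=0
testb = phantom.collect2(container=container, tags=["test"], datapath=['artifact:*.name'], limit=10000)
phantom.debug(len(testb))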
After upgrading a distributed Splunk Enterprise environment from 9.0.5 to 9.1.1, a lot of issues were observed. The most pressing one was the unexpected wiping of all inputs.conf and outputs.conf files from the heavy forwarders. All configuration files are still present and intact on the deployment server, but after unpacking the updated version and bringing Splunk back up on the heavy forwarders, all inputs/outputs files were wiped from all apps and are not being fetched from the deployment server. So none of them were listening for incoming traffic or forwarding to indexers.

Based on previous experience, there is no way to "force push" configuration from the deployment server when all instances are "happy", which means manual inspection and repair of all affected apps. So now I am curious as to why this happened. If there was something wrong with the configuration, I'd expect errors to be thrown, not the entire files to be deleted. Any input regarding why this happened, and how to find out, would be appreciated.

UPDATE: By now it is very clear what happened: a bunch of default folders were simply deleted during the update. There are a few indications of this in different log files:

11-08-2023 12:21:19.816 +0100 INFO AuditLogger - Audit:[timestamp=11-08-2023 12:21:19.816, user=n/a, action=delete-parent,path="/opt/splunk/etc/apps/<appname>/default/inputs.conf"

This was unfortunate, as the deploymentclient.conf file was stored in <appname>/default and got erased together with almost all inputs/outputs.conf files and a bunch of other things stored in the default folder. I don't get the impression that this is expected behaviour, so now I am curious about the cause of this highly strange outcome.
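
For anyone auditing the damage on a forwarder, btool with --debug shows which stanzas (if any) survived and which file each one comes from, which makes the missing files obvious:

$SPLUNK_HOME/bin/splunk btool inputs list --debug
$SPLUNK_HOME/bin/splunk btool outputs list --debug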
I am trying to write a rex command that extracts a "registrar" field from the four event examples below. The company names (e.g. "ABC Holdings, Inc.") are what I want as the value of "registrar". I am using the following regex to extract the field, but I seem to be capturing the \r\n after the company name as well. How can I modify my regex to capture just the company name leading up to "\r\n Registrar IANA"?

Current regex being used:

Registrar:\s(?<registrar>.*?) Registrar IANA

Example events:

Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: ABC Holdings, Inc.\r\n Registrar IANA ID: 972
Expiry Date: 2026-12-09T15:18:58Z\r\n Registrar: Gamer.com, LLC\r\n Registrar IANA ID: 837
Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: NoCo MFR Ltd.\r\n Registrar IANA ID: 756
Expiry Date: 2026-12-09T15:18:59Z\r\n Registrar: Onetrust Group, INC\r\n Registrar IANA ID: 478
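
A sketch of one fix: rather than a lazy match that runs all the way to " Registrar IANA", stop the capture at the first carriage return or line feed, or at a literal backslash in case the \r\n shown above is literal text in the events rather than real control characters:

| rex "Registrar:\s(?<registrar>[^\r\n\\\\]+)"

The character class excludes CR, LF, and a literal backslash, so it covers both cases; any stray trailing whitespace could then be removed with | eval registrar=trim(registrar).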
Data Ingest and Search are core Splunk Cloud Platform capabilities that customers rely on. However, customers, primarily in the financial, public, and healthcare sectors, are increasingly concerned about their data workloads traversing the public Internet. To support these requirements, we introduced AWS PrivateLink support on the Splunk Cloud Platform. Private connectivity helps insulate the data exchange channels between a customer's AWS cloud environment and Splunk Cloud Platform from the public Internet. Since Oct '22, compliance customers with an AWS presence have been able to send their ingest data (Forwarder and HEC traffic) to their Splunk Cloud Platform stack over private endpoints, without exposing it to the public Internet.

Expanding on this foundational capability, I am excited to announce that starting today, customers with compliance subscriptions, such as PCI, HIPAA, IRAP, and FedRAMP Moderate, can route their core search traffic and API access through the search endpoints via AWS PrivateLink.

Powered by the Splunk Cloud Platform's Admin Config Service APIs, onboarding private connectivity (both for search and for data ingest) on your stack is completely self-service. You can learn more about the functionality and evaluate whether it is the right choice for you by reviewing the private connectivity Overview and the Getting started guide.

*Customers are responsible for AWS data transfer costs associated with their VPC. For more info, refer to AWS PrivateLink pricing. AWS is a trademark of Amazon.com, Inc. or its affiliates.