All Topics

Hi, I have a distributed setup. Each site has 1 indexer, 1 search head, and 1 master server, all part of a cluster. One of the indexers on the other site was down for more than 48 hours. The indexer is back up now, but on the master server I can see data durability showing as red. How do I fix the issue now? Regards, Nilupat
One of our universal forwarders is connected to another team's deployment server. We need to connect it to our deployment server now, and we don't want to use the other team's DS. I am not able to see any configuration related to that in deploymentclient.conf. What is the proper syntax for this?
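
For reference, a minimal deploymentclient.conf on the forwarder usually looks like the sketch below; the host name is a placeholder and the management port is assumed to be the default 8089. It typically lives in $SPLUNK_HOME/etc/system/local/ or in a dedicated app, and the forwarder needs a restart to pick it up:

[target-broker:deploymentServer]
# placeholder host; point this at your own deployment server
targetUri = your-deployment-server.example.com:8089
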
Getting this issue on a Windows server. There's only an inputs.conf file with the following:

[monitor://L:\Logs\ApplicationLogs*.log]
sourcetype = xxx
index = yy
disabled = 0

1/12/2022 05:03.3 1000 ERROR WatchedFile [7088 tailreader0] - Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.
1/12/2022 05:03.3 1000 ERROR TailReader [7088 tailreader0] - Ignoring path="L:\Logs\ApplicationLogs20220112VServer.log" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.
Hi there, I would like to monitor indexes that have not been active for more than 24 hours and display the index names in a table, along with the time of the last received event. Thanks
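
One common way to approach this is with tstats across all indexes; a hedged sketch (the 24-hour threshold and the choice of index=* are assumptions to adjust, and note that indexes with no events at all inside the search window will not appear, so run it over a long enough time range):

| tstats latest(_time) AS last_event WHERE index=* BY index
| where last_event < relative_time(now(), "-24h")
| eval last_event = strftime(last_event, "%Y-%m-%d %H:%M:%S")
| table index last_event
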
Hello Splunkers! I am trying to find a way to set up a cron schedule in the DB Connect app. I want the schedule to run on the second Tuesday of every month, so the next runs should be 02/08/2022, then 03/08/2022, then 04/05/2022, and so on. We tried hard to make this work. Below is what we tried. I thought this would work, but it shows the next schedule will be on 1/18, not next month. Any brilliant ideas, Splunkers? Thanks in advance.
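
Plain five-field cron has no direct notion of "the nth weekday of the month", so any expression here is a workaround. One commonly suggested expression (an assumption worth testing against your scheduler, since cron implementations differ on whether a restricted day-of-month and day-of-week are combined with AND or OR) is:

0 6 8-14 * 2

i.e. 06:00 on a Tuesday falling between the 8th and the 14th of the month, which is always the second Tuesday. If your scheduler ORs the two day fields instead, this will also fire on every day from the 8th to the 14th regardless of weekday, in which case the usual fallback is to schedule weekly on Tuesdays and discard the unwanted runs downstream.
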
Is there a way to call a Python script from a dashboard and display the output received from the script in the dashboard? I am not looking to ingest this data into Splunk, only to display it on demand when looking at the dashboard.
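
One approach that fits this on-demand pattern is a custom generating search command backed by the Python SDK, which the dashboard can then call like any other search. A minimal sketch, assuming the splunklib package is bundled with the app and that /opt/scripts/mystatus.sh stands in for whatever script you want to run:

#!/usr/bin/env python
import sys
import time
import subprocess
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration


@Configuration()
class RunMyScriptCommand(GeneratingCommand):
    """Usage in a dashboard search: | runmyscript"""

    def generate(self):
        # Run the external script and surface its stdout as a single event.
        result = subprocess.run(["/opt/scripts/mystatus.sh"],
                                capture_output=True, text=True, timeout=30)
        yield {"_time": time.time(), "_raw": result.stdout}


dispatch(RunMyScriptCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The command also needs a commands.conf stanza in the app (for example [runmyscript] with filename = runmyscript.py and chunked = true), and the dashboard panel's search would then simply be | runmyscript.
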
https://docs.splunk.com/Documentation/Splunk/latest/admin/savedsearchesconf mentions two lookup-generating actions: action.lookup and action.populate_lookup. Some of the differences are clear, though not explicitly listed, in the docs. What's the complete set of differences? When should I use one or the other and when do I have to use outputlookup?

action.lookup = <boolean>
* Specifies whether the lookup action is enabled for this search.
* Default: false
action.lookup.filename = <lookup filename>
action.lookup.append = <boolean>

and

action.populate_lookup = <boolean>
* Specifies whether the lookup population action is enabled for this search.
* Default: false
action.populate_lookup.dest = <string>
run_on_startup = <boolean>
run_n_times = <unsigned integer>
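
For comparison, a hedged sketch of a populate_lookup-style saved search next to the equivalent explicit outputlookup; the stanza name, search, and lookup name are placeholders, and this only illustrates the population path, not every behavioral difference being asked about:

[populate_clientip_counts]
search = index=web | stats count BY clientip
enableSched = 1
cron_schedule = 0 2 * * *
# populate_lookup writes the search results into the named lookup table / CSV
action.populate_lookup = 1
action.populate_lookup.dest = clientip_counts
run_on_startup = true

# The same result expressed inside SPL, with no saved-search action involved:
# index=web | stats count BY clientip | outputlookup clientip_counts.csv
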
Hi community, I am trying to filter out some undesired traffic from a particular index. I read about the option of using props.conf and transforms.conf. The query matching the traffic that I don't want looks like this: index=abc sourcetype=abc_traffic dest_ip=255.255.255.255. The index abc is located in the Search app. So, I went to my search head -> opt/splunk/etc/apps/search/local and modified props.conf with the following:

[abc_traffic]
TRANSFORMS-null= broadcast-null

Then, I created a TRANSFORMS.conf file in the same directory with the following entry:

[broadcast-null]
REGEX= dest_ip= 255.255.255.255
DEST_KEY= queue
FORMAT= nullQueue

Then I restarted Splunk. I am not sure if I am doing something wrong; maybe I am using the wrong location or format. I don't have much experience managing Splunk. Any help is appreciated!
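
For comparison, a minimal sketch of what a working nullQueue pair typically looks like. Two caveats stated as assumptions: these are parse-time settings, so they belong on the indexers (or a heavy forwarder) that first parse the data rather than on a search head, and the regex below is written against raw event text, assuming dest_ip=255.255.255.255 appears literally in _raw (if the field is extracted from differently formatted raw text, the regex must match that raw text instead):

props.conf:
[abc_traffic]
TRANSFORMS-null = broadcast_null

transforms.conf:
[broadcast_null]
REGEX = dest_ip=255\.255\.255\.255
DEST_KEY = queue
FORMAT = nullQueue
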
I am getting performance errors in ES regarding the many indexes used by users, especially the admin role. Any SPL or direction is much appreciated.
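
One hedged starting point is to review which indexes each role is allowed to search by default, since roles (especially admin) that default to a very wide set of indexes are a common source of search overhead in ES; the REST endpoint below is standard, but whether this is the cause of your specific errors is an assumption:

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesDefault srchIndexesAllowed srchDiskQuota srchJobsQuota
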
I'm trying to have Splunk submit two separate events in one run.

def run():
    logging.info("Running Test....")
    now = time.time()
    output = f"""
    <stream>
      <event>
        <time>{now}</time>
        <data>event_status="(0)Item0."</data>
      </event>
      <event>
        <time>{now}</time>
        <data>event_status="(1)Item1."</data>
      </event>
    </stream>
    """
    print(output)
    sys.stdout.flush()

This runs and the XML is submitted, but it only shows as a single event:

1/12/22 9:47:54.000 PM <stream> <event> <time>1642024074.8583786</time> <data>event_status="(0)Item0."</data> </event> <event> <time>1642024074.8583786</time> <data>event_status="(1)Item1."</data> </event> </stream>

Is there any way to submit these two events so they show up as separate events? I'm looking at polling multiple statistics for a multitenant application and would like to display each tenant separately.
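
A possible explanation, offered as an assumption based on the behavior described: the <stream>/<event> XML format is consumed by modular inputs that declare an XML streaming mode, whereas a plain scripted input treats whatever the script prints as raw text, which is why the whole XML document lands as one event. Under that assumption, a minimal sketch for a plain scripted input is to print one self-describing line per event and let line breaking split them:

#!/usr/bin/env python
import sys
import time

def run():
    # One line per event; with SHOULD_LINEMERGE = false in props.conf each
    # line is indexed as its own event.
    now = time.strftime("%Y-%m-%d %H:%M:%S %z")
    for i, item in enumerate(["Item0", "Item1"]):
        sys.stdout.write(f'{now} event_status="({i}){item}."\n')
    sys.stdout.flush()

if __name__ == "__main__":
    run()
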
I've been able to run a dashboard from the command line by:
1. copying and pasting the simple XML into a file
2. updating tokens with desired values
3. running the PDF render command from curl, e.g.,

curl -sku guest:pwd "https://splunkhost:8089/services/pdfgen/render" --data-urlencode "input-dashboard-xml=$(cat sample-dashboard.xml)" -d namespace=search -d paper-size=a4-landscape > mydash.pdf

Is there a way to use Python/REST to do the same? I tried some of the endpoints, but they create the dashboard XML with extra scaffolding; they seem to be intended for adding and updating dashboards rather than running the dashboard itself.
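
The same call can be made from Python with the requests library; this is a minimal sketch mirroring the curl command above (same endpoint, credentials, and parameters, with verify=False standing in for curl's -k):

import requests

with open("sample-dashboard.xml") as f:
    dashboard_xml = f.read()

resp = requests.post(
    "https://splunkhost:8089/services/pdfgen/render",
    auth=("guest", "pwd"),
    verify=False,  # equivalent of curl -k; use a proper CA bundle in production
    data={
        "input-dashboard-xml": dashboard_xml,
        "namespace": "search",
        "paper-size": "a4-landscape",
    },
)
resp.raise_for_status()

with open("mydash.pdf", "wb") as out:
    out.write(resp.content)
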
I have SCK set up and it collects my Kubernetes metrics. We have access out of the box to the node memory limit kube.node.memory.allocatable (in MB) and to the memory usage kube.node.memory.working_set_bytes (in bytes), but we want to do some calculations to get the memory usage percentage per node.
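
A hedged sketch of that calculation with mstats; the metrics index name and the node dimension are assumptions to adjust for your environment, and the unit conversion assumes allocatable is in MB and working_set_bytes is in bytes as described above:

| mstats latest(kube.node.memory.working_set_bytes) AS used_bytes
         latest(kube.node.memory.allocatable) AS allocatable_mb
  WHERE index=em_metrics BY node
| eval used_mb = used_bytes / 1024 / 1024
| eval mem_used_pct = round(used_mb / allocatable_mb * 100, 2)
| table node used_mb allocatable_mb mem_used_pct
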
Hello! I have a really simple Unix-based shell script that returns info about the httpd (Apache) service. The script is encapsulated in an input, so the printf statement becomes the event. Each event is one line only. Here is an indexed event coming from the UF (with highlights that I will explain below): For some reason the sourcetype is not working, since _time is not what I specify; rather, it is half from the field I want (the timestamp, in green) and half from some text in the payload that I do not want (the date, in red). The sourcetype is currently this (it has gone through many evolutions):

[linux:httpdinfo]
SHOULD_LINEMERGE = false
KV_MODE = auto
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S %z

No matter what I try, I cannot seem to get it to work. Could somebody give me a push in the right direction? Thanks! Andrew
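
A hedged pointer: when the timestamp sits partway into the event, the usual missing piece is TIME_PREFIX, which anchors where Splunk starts looking before MAX_TIMESTAMP_LOOKAHEAD is applied. The prefix regex below is only a placeholder, since the sample event isn't visible here, and these parse-time settings need to live on the first full Splunk instance (indexer or heavy forwarder), not on the UF:

[linux:httpdinfo]
SHOULD_LINEMERGE = false
KV_MODE = auto
# Placeholder: a regex matching whatever literally precedes the timestamp in the raw event
TIME_PREFIX = timestamp=
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S %z
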
Recently we onboarded these logs, but most of the fields are not extracted even though the values are written with =. I am trying to extract batch_id, tran_id, pricing hashcode and rules hashcode. I tried to extract them from the GUI, but I am seeing a lot of mismatches. Can anyone help me with this? Here are sample logs:

{"logGroup": "ldcs-devl-eb-06-webapp-Application", "logStream": "ip-10-108-18-243 (i-004009051755596bb) - ld-pricing.log", "aws_acctid": "189693026861", "aws_region": "us-east-1", "splunkdata": {"shard_id": "000000000020", "splkhf": "spitsi-acpt-log-heavy-4", "rvt": 1642014308933}, "lifecycle": "devl-shared", "aws_appshortname": "ldcs", "appcode": "FVV", "cwmessage": "2022-01-12 14:05:02.322|[DefaultThreadPool-18] LD-PRICING-INFO c.f.l.pricing.mapper.DealSetsMapper STARTOFFIELDS|component=LD-PRICING|user_id=c9273wne|seller_id=165700007|session_id=D86C9BAF3F308C7838E4A52BC0DA0938.LDNG-UI-cl02|tran_id=9a6e8ba3-2c01-4b18-bbfb-88a854bbdb85|batch_id=9a6e8ba3-2c01-4b18-bbfb-88a854bbdb85|dealset_id=116784|execution_type=WholeLoan|loan_count=1|time=|messageId=ID:SOADevl-ems08.752D61D05D2DBE2E02:414|ENDOFFIELDS - Pricing Info ~ Pricing Hashcode: 1761264532 - Rules Hashcode: -1500207091 - uniqueClientDealIdentifier: a37801e4-dbe6-4c3a-bc26-17d1a78a0b28 - sellerLoanIdentifier: BTP22_0111_B10 - poolIdentifier: null - investorCommitmentIdentifier: 116784 - sellerId: 165700007 ", "cwtimestamp": 1641996302000}

{"logGroup": "ldcs-devl-eb-06-webapp-Application", "logStream": "ip-10-108-18-243 (i-004009051755596bb) - ld-pricing.log", "aws_acctid": "189693026861", "aws_region": "us-east-1", "splunkdata": {"shard_id": "000000000020", "splkhf": "spitsi-acpt-log-heavy-4", "rvt": 1642014334358}, "lifecycle": "devl-shared", "aws_appshortname": "ldcs", "appcode": "FVV", "cwmessage": "2022-01-12 14:05:27.035|[DefaultThreadPool-20] LD-PRICING-INFO c.f.l.pricing.mapper.DealSetsMapper STARTOFFIELDS|component=LD-PRICING|user_id=c9273wne|seller_id=165700007|session_id=D86C9BAF3F308C7838E4A52BC0DA0938.LDNG-UI-cl02|tran_id=751b1112-0511-4dbd-b94c-a6409c23b20d|batch_id=751b1112-0511-4dbd-b94c-a6409c23b20d|dealset_id=116784|execution_type=WholeLoan|loan_count=1|time=|messageId=ID:SOADevl-ems08.752D61D05D2DBE2E0A:457|ENDOFFIELDS - Pricing Info ~ Pricing Hashcode: 1761264532 - Rules Hashcode: -1500207091 - uniqueClientDealIdentifier: a37801e4-dbe6-4c3a-bc26-17d1a78a0b28 - sellerLoanIdentifier: BTP22_0111_B10 - poolIdentifier: null - investorCommitmentIdentifier: 116784 - sellerId: 165700007 ", "cwtimestamp": 1641996327000}
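
A hedged sketch of search-time extraction with rex; the index and sourcetype are placeholders, and this assumes the JSON wrapper is already parsed so that cwmessage is available as a field (if it isn't, run the same rex against _raw instead):

index=your_index sourcetype=your_sourcetype
| rex field=cwmessage "tran_id=(?<tran_id>[^|]*)\|batch_id=(?<batch_id>[^|]*)"
| rex field=cwmessage "Pricing Hashcode:\s*(?<pricing_hashcode>-?\d+)\s*- Rules Hashcode:\s*(?<rules_hashcode>-?\d+)"
| table _time batch_id tran_id pricing_hashcode rules_hashcode
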
I have a few Windows servers that I need to enable file monitoring on so they send logs to a Splunk Enterprise server. I could use your hands-on experience please, including any SPL, do's and don'ts. I found this link, but it is too generic: https://docs.splunk.com/Documentation/Splunk/8.2.4/Data/MonitorfilesystemchangesonWindows Thanks a million in advance. Have a great week ahead.
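
If the goal is simply to forward the contents of log files from those servers, the common pattern is a universal forwarder on each host with a monitor stanza in inputs.conf; the path, index, and sourcetype below are placeholders. (The linked page, if I read the title right, covers auditing file-system changes, which is a different input type from file content monitoring.)

[monitor://C:\Path\To\Logs\*.log]
index = your_windows_index
sourcetype = your_app_logs
disabled = 0
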
I am trying to configure a load balancer for a Splunk SHC, but I am unable to find any resources on how to do it. This is the first time I have ever worked with load balancers, so I need step-by-step instructions.
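
As an illustration only (nginx is just one option, the member host names and ports are placeholders, and TLS details are omitted), a front end for the cluster members' web port with session affinity might look roughly like this:

upstream shc_members {
    ip_hash;                          # keep each user's session pinned to one member
    server sh1.example.com:8000;
    server sh2.example.com:8000;
    server sh3.example.com:8000;
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key ... (omitted)

    location / {
        proxy_pass http://shc_members;
        proxy_set_header Host $host;
    }
}
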
Hello, I am using the MLTK and I get this error: Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found. It would be great if the team can help me solve this.
Hi All, I am writing a playbook that sends an automated email when a case is opened in Phantom. I know that if you are doing a manual promotion (via the GUI), then you would need a REST query executed from a playbook, hitting the container endpoint and looking for "container_type": "case". Then you would just have a format block to populate the REST results and a connected send-email action via SMTP. What are the steps to get a REST query executed? Brandy
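
For illustration, a hedged sketch of that REST query made with Python's requests against the Phantom REST API; the base URL and automation token are placeholders, and the filter parameters follow the documented _filter_* convention (filter values are JSON-encoded, hence the quoted "case"):

import requests

PHANTOM_BASE_URL = "https://phantom.example.com"           # placeholder
HEADERS = {"ph-auth-token": "YOUR_AUTOMATION_USER_TOKEN"}  # placeholder

# Ask the container endpoint for containers whose type is "case"
params = {
    "_filter_container_type": '"case"',
    "sort": "id",
    "order": "desc",
    "page_size": 10,
}
resp = requests.get(f"{PHANTOM_BASE_URL}/rest/container",
                    headers=HEADERS, params=params, verify=False)
resp.raise_for_status()

for container in resp.json().get("data", []):
    print(container["id"], container["name"])
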
I'm trying to get a new sourcetype (NetApp user-level audit logs, exported as XML) to work, and I think my fields.conf tokenizer is breaking things. But I'm not really sure how, or why, or what to do about it. The raw data is XML, but I'm not using KV_MODE=xml because that doesn't properly handle all the attributes. So, I've got a bunch of custom regular expressions, the true backbone of all enterprise software. Here's a single sample event (but you can probably disregard most of it, it's just here for completeness): <Event><System><Provider Name="NetApp-Security-Auditing" Guid="{guid-edited}"/><EventID>4656</EventID><EventName>Open Object</EventName><Version>101.3</Version><Source>CIFS</Source><Level>0</Level><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><Result>Audit Success</Result><TimeCreated SystemTime="2022-01-12T15:42:41.096809000Z"/><Correlation/><Channel>Security</Channel><Computer>server-name-edited</Computer><ComputerUUID>guid-edited</ComputerUUID><Security/></System><EventData><Data Name="SubjectIP" IPVersion="4">1.2.3.4</Data><Data Name="SubjectUnix" Uid="1234" Gid="1234" Local="false"></Data><Data Name="SubjectUserSid">S-1-5-21-3579272529-1234567890-2280984729-123456</Data><Data Name="SubjectUserIsLocal">false</Data><Data Name="SubjectDomainName">ACCOUNTS</Data><Data Name="SubjectUserName">davidsmith</Data><Data Name="ObjectServer">Security</Data><Data Name="ObjectType">Directory</Data><Data Name="HandleID">00000000000444;00;002a62a7;0d3d88a4</Data><Data Name="ObjectName">(Shares);/LogTestActivity/dsmith/wordpress-shared/plugins-shared</Data><Data Name="AccessList">%%4416 %%4423 </Data><Data Name="AccessMask">81</Data><Data Name="DesiredAccess">Read Data; List Directory; Read Attributes; </Data><Data Name="Attributes">Open a directory; </Data></EventData></Event> My custom app's props.conf has a couple dozen lines like this, for each element I want to be able to search on: EXTRACT-DesiredAccess = <Data Name="DesiredAccess">(?<DesiredAccess>.*?)<\/Data> EXTRACT-HandleID = <Data Name="HandleID">(?<HandleID>.*?)<\/Data> EXTRACT-InformationRequested = <Data Name="InformationRequested">(?<InformationRequested>.*?)<\/Data> This works as you'd expect, except for a couple of fields where they're composites. This is most noticeable in the DesiredAccess element, which in our example looks like: <Data Name="DesiredAccess">Read Data; List Directory; Read Attributes; </Data> Thus you get a single field with "Read Data; List Directory; Read Attributes; " and if you only need to look for, say, "List Directory," you have to get clever with your searches. So, I added a fields.conf file with this in it: [DesiredAccess] TOKENIZER = \s?(.*?); When I paste the 'raw' contents of that field, and that regex, into a tool like regex101.com, it works and returns the expected results. Similarly, it also works if I remove it from fields.conf, and put it in as a makemv command: index=nonprod_pe | makemv tokenizer="\s?(.*?);" DesiredAccess With the TOKENIZER element in fields.conf, the DesiredAccess attribute just doesn't populate, period. So I assume it's the problem. (Since this is in an app, the app's metadata does contain explicit "export = system" lines for both [props] and [fields]. And the app is on indexers and search heads. Probably doesn't need to be in both places, but hey I'm still learning...) So, what am I doing wrong with my fields.conf tokenizer, that's caused it to fail completely to identify any elements?
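
One workaround worth trying, offered as an assumption rather than a confirmed explanation of why TOKENIZER misbehaves here: sidestep the TOKENIZER entirely and build the multivalue field with a calculated field in props.conf, which runs after the EXTRACT and simply splits the already-extracted string. The field name DesiredAccess_mv is hypothetical, chosen so it doesn't collide with the EXTRACT-ed field, and the stanza name stands in for your existing sourcetype stanza:

[your:netapp:sourcetype]
# split "Read Data; List Directory; Read Attributes; " into a multivalue field
EVAL-DesiredAccess_mv = split(trim(DesiredAccess, "; "), "; ")
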
Hello, I am trying to integrate the Automic Automation Intelligence tool with Splunk. https://www.broadcom.com/products/software/automation/automic-automation-intelligence Basically, this tool reads the Autosys database to download the job run history (successes, failures, etc.). I want to integrate it with Splunk so that I can create a dashboard with the job run history, and the job run history should be fetched in real time. Is there any way to do that? Any API calls or any other way? Please suggest.
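
One generic integration path, sketched under assumptions (I don't know which APIs Automic Automation Intelligence itself exposes): have whatever process extracts the job run history push each record to Splunk's HTTP Event Collector. The host, token, index, sourcetype, and field names below are all placeholders:

import requests

HEC_URL = "https://your-splunk-host:8088/services/collector/event"  # placeholder
HEC_TOKEN = "your-hec-token"                                        # placeholder

payload = {
    "index": "autosys",                 # placeholder index
    "sourcetype": "autosys:jobrun",     # placeholder sourcetype
    "event": {"job_name": "SAMPLE_JOB", "status": "SUCCESS",
              "end_time": "2022-01-12T14:05:02Z"},
}
resp = requests.post(HEC_URL, json=payload,
                     headers={"Authorization": f"Splunk {HEC_TOKEN}"}, verify=False)
resp.raise_for_status()
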