All Topics

Hello,

We're trying to set up the Azure Storage account from the add-on and we're getting an authentication failure. The credential works fine when tested using Azure Storage Explorer. The only difference is that the client environment has certain policies in place. It started to work when they added the respective VNET/Subnet in the storage firewall, while it keeps failing with an authentication error over the storage account private endpoint.

Can the app support storage account connectivity through a specific storage service's private endpoint? From a configuration perspective, we can only define the storage account name and access key.

Thanks.
Hello all, In our environment, the UiPath team doesn't seem to know what export format is expected by the default inputs.conf (C:\uipath_logs). Is there any documentation that might help them?
Hi Everyone, How can I fit an analytical expression to a dataset in a dashboard? The expression could, for instance, be: y = a + b*exp(x - x_0). Let me know what you think. Thanks
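If the Machine Learning Toolkit (MLTK) app is installed, one possible approach is to linearize the expression and use MLTK's `fit` command: since y = a + b*exp(x - x_0) = a + (b*exp(-x_0)) * exp(x), the model is linear in exp(x). A sketch, assuming the data points are in fields named x and y (names hypothetical):

```
<your base search>
| eval ex=exp(x)
| fit LinearRegression y from ex
```

The fitted intercept approximates a, and the coefficient on ex approximates b*exp(-x_0); note that b and x_0 cannot be separated from each other without fixing one of them.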
We have the RSA SecurID add-on installed on a Syslog server which is also a HF. Can anyone share the steps to upgrade the add-on? @splunk
I am getting a result of 99.99%, but it is being rounded off to 100.00%. I want to show 99.99% only.
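If the underlying value is something like 99.9987 and round() pushes it up to 100.00, truncating instead of rounding keeps the display at 99.99. A minimal sketch, assuming the percentage is in a field called pct (field name hypothetical):

```
| eval pct_display=floor(pct * 100) / 100
```

floor(99.9987 * 100) / 100 gives 99.99, whereas round(99.9987, 2) would give 100.00.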
How to convert a table like this (2 rows per topic):

topic  mbean_property_name  bytes
A      BytesOutPerSec       60376267182
A      BytesInPerSec        12036381418
B      BytesInPerSec        6246693551
B      BytesOutPerSec       6237320887

to this:

topic  BytesOutPerSec  BytesInPerSec
A      60376267182     12036381418
B      6237320887      6246693551
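This kind of pivot is commonly done with `xyseries`; a minimal sketch, assuming the three columns are already extracted as fields:

```
| xyseries topic mbean_property_name bytes
```

The equivalent `| chart values(bytes) over topic by mbean_property_name` should produce the same shape.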
If I have an index with a retention of 90 days, can I make a rough estimate of the cost of increasing the retention of index=index-name by an extra 90 days?
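One rough way to ground such an estimate is to measure how much disk the current 90 days occupy with `dbinspect`; assuming ingest stays steady, keeping an extra 90 days roughly doubles that figure (the index name is a placeholder):

```
| dbinspect index=index-name
| stats sum(sizeOnDiskMB) AS current_90d_MB
| eval extra_90d_MB=current_90d_MB
```

The cost would then be that extra disk times your storage price per MB, plus any replication factor in a clustered environment.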
Can someone explain the structure of a configuration file in Splunk: what components does it contain, and what are they called? [What are the important configuration files in Splunk, and what is the purpose of these different files? If a file such as inputs.conf is present in multiple apps, how does Splunk consolidate it? What is the file precedence order? Can I have my own configuration file name, such as mynameinputs.conf — will it work, and how?]
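For context, Splunk .conf files are INI-style: stanzas in square brackets, each containing key = value attributes. A hypothetical monitor stanza from an app's local/inputs.conf might look like:

```
# Hypothetical example: the stanza header names the input type and path,
# and the attributes below it configure that input.
[monitor:///var/log/myapp]
index = myapp_idx
sourcetype = myapp:log
disabled = 0
```

When the same stanza appears in multiple apps, Splunk merges the attributes by precedence (broadly, for global context: system/local first, then the apps' local directories, then the apps' default directories, then system/default). A custom filename such as mynameinputs.conf will not be picked up by the standard processors — Splunk only loads the documented .conf filenames — though an app's own code can read custom ones.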
Hello,

There is an old system where we want to upgrade Splunk to the newest version. First we want to upgrade the forwarders on 3 test servers. The current version of the Splunk universal forwarder is 7.0.3.0, and we want to raise it to 9.2.1. Would that version work for the time being with Splunk Enterprise 7.3.1? I know it would be better to upgrade Enterprise first, as the best practice is to use indexers with versions that are the same as or higher than the forwarder versions (but there is hesitation to upgrade the indexers first, as they are also used for production data). Would it be possible to do the forwarders first?

Edit: The upgrade was successful.
If attr.error exists, then Error should be attr.error. If attr.error does not exist and attr.error.errmsg exists, then Error should be attr.error.errmsg. I have tried the code below; only one case works, the other fails. Please advise.

eval Error=case(NOT attr.error =="*", 'attr.error', NOT attr.error.errmsg =="*", 'attr.error.errmsg')
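Two things tend to trip this up: field names containing dots must be single-quoted wherever eval reads them (including inside the condition), and existence is easier to test with isnotnull() — or simply coalesce() — than with NOT ... == "*". A sketch:

```
| eval Error=coalesce('attr.error', 'attr.error.errmsg')
```

The equivalent case() form would be case(isnotnull('attr.error'), 'attr.error', isnotnull('attr.error.errmsg'), 'attr.error.errmsg').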
Hello Splunk Community, I'm encountering challenges while converting multivalue fields to single-value fields for effective visualization in a line chart. Here's the situation. Current output (one row, with rwws01 and rwmini01 as multivalue fields):

rwws01 rwmini01 ds_file_path
\\swmfs\orca_db_january_2024\topo\raster.ds 0.56 0.98 0.99 5.99 9.04 8.05 5.09 5.66 7.99 8.99

In this output chart table, the fields rwws01 and rwmini01 are dynamic, so hardcoding them isn't feasible. The current output format makes it hard to visualize the data in a line chart. My required output is:

ds_file_path rwws01 rwmini01
\\swmfs\orca_db_january_2024\topo\raster.ds 0.98 5.99
\\swmfs\orca_db_january_2024\topo\raster.ds 0.99 3.56
\\swmfs\orca_db_january_2024\topo\raster.ds 0.56 4.78
\\swmfs\orca_db_january_2024\topo\raster.ds NULL (or 0) 9.08
\\swmfs\orca_db_january_2024\topo\raster.ds NULL (or 0) 2.98
\\swmfs\orca_db_january_2024\topo\raster.ds NULL (or 0) 5.88

I tried different commands and functions, but nothing gave me the desired output. I'm seeking suggestions on how to achieve this single-value field format, or alternative functions and commands to achieve this output and create a line chart effectively. Your insights and guidance would be greatly appreciated! Thank you.
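One common pattern for splitting paired multivalue fields into separate rows is mvzip + mvexpand. A sketch for two known field names (dynamic field names would need an extra step such as foreach, and mvzip only pairs values up to the length of the shorter list, so unequal lengths need care):

```
| eval pair=mvzip(rwws01, rwmini01)
| mvexpand pair
| eval rwws01=mvindex(split(pair, ","), 0),
       rwmini01=mvindex(split(pair, ","), 1)
| fields - pair
```

After mvexpand, each row carries one rwws01/rwmini01 value pair alongside ds_file_path, which is the single-value shape a line chart needs.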
Greetings fellow Splunkers, My App was archived by the time I got around to updating the content, passing the AppInspect check, and uploading the App's new release. After that, I received an email from the AppInspect email group at Splunk stating the App passed and has qualified for Splunk Cloud compatibility. It has now been 3 weeks since the App passed, but it has not been un-archived / reinstated. I found some instructions that say to click the "Reinstate App" button under your App profile's "Manage App", but I do not see that button available. Can anyone post how to get an App unarchived / reinstated?
The splunkd.pid file is completely missing from the /opt/splunkforwarder/var/run/splunk path. Kindly suggest how this can be resolved.
In the query below, if c=I, the regular expression should be | rex field=attr.namespace "(?<DB>[^\.]*)"; if c is anything other than "I", the rex should be | rex field=attr.ns "(?<DB>[^\.]*)".

index="aaa" (source="/test/log/testing.log") host IN(host1) c=N
| rex field=attr.ns "(?<DB>[^\.]*)"
| table DB
| dedup DB

How can I adjust the query?
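Since rex itself cannot branch on another field's value, one workaround is to run both extractions into separate fields and pick between them with if(). A sketch (the intermediate field names are hypothetical):

```
index="aaa" (source="/test/log/testing.log") host IN(host1)
| rex field=attr.namespace "(?<DB_from_namespace>[^\.]*)"
| rex field=attr.ns "(?<DB_from_ns>[^\.]*)"
| eval DB=if(c=="I", DB_from_namespace, DB_from_ns)
| dedup DB
| table DB
```

Both rex calls run on every event, but each only populates its field when the source field is present, so the if() picks the right one per event.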
Hello Splunkers! I want to change the time picker of this dashboard in Enterprise Security so that it provides the count of notables over the last 12 hours instead of the last 24 hours. I tried changing the values related to time in the source code via the GUI, but it does not work: for some reason the changes are not being saved, even though I am hitting the save button. Is there a way to add a time picker to this dashboard, so that we can select a time period of interest at any time and update the dashboard instantly? Thanks in advance for taking the time to read and reply to my post.
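If the dashboard is editable SimpleXML (built-in ES dashboards often are not, which may explain the save failures; cloning the dashboard first is a common workaround), a time input defaulting to the last 12 hours might be sketched like this (the token name is hypothetical, and each panel's search would need to reference it):

```
<input type="time" token="notable_time">
  <label>Time range</label>
  <default>
    <earliest>-12h@h</earliest>
    <latest>now</latest>
  </default>
</input>
```

Each panel then uses the token via `<earliest>$notable_time.earliest$</earliest>` and `<latest>$notable_time.latest$</latest>` in its search element.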
This is just a fun optimization question; the benefit may be very little in fact! My Splunk searches are already optimized, joining 24 million events across 3 sourcetypes in about 40 seconds when searching over 30 days, by using the stats method for joining data (https://conf.splunk.com/files/2019/slides/FNC2751.pdf).

However, before I do all the join operations using stats, I first have to use stats latest() to ensure each event is the latest. That is because all my sourcetypes have historical data, but each record has a unique identifier. Not all sourcetypes have data every single day, so I have to look back at least 30 days to get a reasonably complete picture. Here's an example stats latest():

<initial search>
| fields _time, xxx, xxx, <pick your required fields>
| eval coalesced_primary_key=coalesce(sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary)
| stats latest(*) AS * by coalesced_primary_key

The total number of events in the index before the implicit search (first line) is run is 24,000,000. After the implicit search, but before stats latest() is run, I have 13,000,000 events. After stats latest() is run, the total becomes 750,000 events.

What if the stats latest pipe could be skipped altogether, by somehow making the implied search (first line) return only the latest events? In other words, cutting the event total from 24,000,000 to 750,000 directly? That could make the query much faster if it is possible. I already have the unique primary keys for each sourcetype, so the idea would be something like latest(sourcetype_1_primary), but in the first-line implicit search. I'm afraid my Splunk knowledge doesn't help me there, and googling doesn't seem to pull up anything.
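One thing that may be worth benchmarking (a sketch, not a guaranteed win): because events stream back in reverse time order, dedup on the coalesced key keeps only the most recent event per key without aggregating every field the way stats latest(*) does:

```
<initial search>
| fields _time, sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary, <other required fields>
| eval coalesced_primary_key=coalesce(sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary)
| dedup coalesced_primary_key
```

This does not filter events at the indexers the way a true first-line predicate would (the 24M still have to be read), but it can avoid the aggregation cost of stats latest(*); whether it is actually faster depends on key cardinality and is worth an A/B test.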
I am seeing the following alert on the Searching and Reporting App and also within the InfoSec App for Splunk. [idx-1,idx-2,sh-2] Could not load lookup=LOOKUP-threatprotect-severity I am not sure how to go about troubleshooting this further.  Thx.
Hi All, I have SOAP requests and responses being ingested into Splunk under an index. There are multiple API calls available under the index="wireless_retail". How can I get the list of all API calls under this index whose RESPONSETIME is greater than 30 seconds? My sample SOAP response in Splunk is:

<getOrderServiceResponse xmlns=>
  <Case>90476491</Case>
  <SalesOrderId>8811662</SalesOrderId>
  <CustType>GW CONSUMER</CustType>
  <CustNodeId>4000593888</CustNodeId>
  <AccountId>4001293845</AccountId>
  <ServiceName>4372551943</ServiceName>
  <ServiceId>4000996500</ServiceId>
  <BillCyclePeriod>11/07/2023 - 06/19/2024</BillCyclePeriod>
  <NextBillDueDate>06/03/2024</NextBillDueDate>
  <TabAcctBalance/>
  <DeviceUnitPrice>0.00</DeviceUnitPrice>
  <DepositAcctBalance/>
  <tabAmount>0.00</tabAmount>
  <tabMonthlyFee>0.00</tabMonthlyFee>
  <tabDepletionRate>0.00</tabDepletionRate>
  <deviceOutrightCost>0.00</deviceOutrightCost>
  <deviceOutrightPayment>0.00</deviceOutrightPayment>
  <ConnectionFeeDetails>
    <connectionFee>45.00</connectionFee>
    <connectionFeePromoCode>CF9 Connection Fee Promo</connectionFeePromoCode>
    <connectionFeePromoValue>45.00</connectionFeePromoValue>
    <netConnectionFee>0.00</netConnectionFee>
  </ConnectionFeeDetails>
</getOrderServiceResponse>
</soapenv:Body>
</soapenv:Envelope>", RETRYNO="0", OPERATION="getOrderService", METHOD="SOAP", CONNECTORID="48169c3e-9d28-4b8f-9b9f-14ca83299cca", CONNECTORNAME="SingleView", CONNECTORTYPE="Application", CONNECTORSUBTYPE="SOAP", STARTTIME="1715367648945", ENDTIME="1715367688620", RESPONSETIME="39675"

So my sample API calls are getOrderServiceRequest and getOrderServiceResponse, and there are multiple other API calls like this in the index. I want all the API calls along with their RESPONSETIME in a graph, to know which ones take more than 30 seconds. Could you please help?
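Assuming OPERATION and RESPONSETIME appear as key-value pairs in the raw event as in the sample (with RESPONSETIME in milliseconds), a sketch that charts the slow calls per operation:

```
index="wireless_retail"
| rex "OPERATION=\"(?<api_name>[^\"]+)\""
| rex "RESPONSETIME=\"(?<response_ms>\d+)\""
| eval response_ms=tonumber(response_ms)
| where response_ms > 30000
| timechart span=1h max(response_ms) by api_name
```

If the connector already extracts these as indexed or search-time fields, the two rex lines can be dropped and OPERATION/RESPONSETIME used directly.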
This was my original query to get the list of APIs that failed for a client. I have more details about the client in the lookup table. How can I include those in the chart output?

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region
| chart count over ClientName by apiName

This shows the data like:

ClientName  RetrievePaymentsA  RetrievePaymentsB  RetrievePaymentsC
Client A    2                  1                  4
Client B    2                  0                  3
Client C    5                  3                  1

How can I add the other fields to the output, like this:

ClientId  ClientName  Region  RetrievePaymentsA  RetrievePaymentsB  RetrievePaymentsC

Any help will be appreciated.
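Since chart supports only a single over field, one common workaround is to aggregate with stats on all the identity fields, pack them into one key for xyseries, then unpack afterwards. A sketch (the "|" delimiter is an assumption and is only safe if it never appears in the values):

```
index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region
| stats count by ClientID ClientName Region apiName
| eval row=ClientID . "|" . ClientName . "|" . Region
| xyseries row apiName count
| eval ClientID=mvindex(split(row, "|"), 0),
       ClientName=mvindex(split(row, "|"), 1),
       Region=mvindex(split(row, "|"), 2)
| fields - row
| table ClientID ClientName Region *
```

One difference from chart: xyseries leaves missing combinations null rather than 0, so a `| fillnull value=0` at the end may be wanted.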
What is the best approach to run Splunk queries?