All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi everyone, I'm trying to extract fields from Salesforce in a complex architecture. I created a dedicated index to ingest a log that contains the order summary with its various items. The structure of the objects is not editable, and the query I would like to be able to execute is this:

SELECT Id, (SELECT Id, (SELECT Description FROM FulfillmentOrderLineItems) FROM FulfillmentOrders) FROM OrderSummary

Is there a way to extract this log?
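One possible way to pull the nested items apart once the events are in Splunk, assuming the OrderSummary records are indexed as JSON and that the index name salesforce_orders is a placeholder:

index=salesforce_orders
| spath output=fulfillment_orders path=FulfillmentOrders{}
| mvexpand fulfillment_orders
| spath input=fulfillment_orders output=line_item_description path=FulfillmentOrderLineItems{}.Description
| table Id fulfillment_orders line_item_description

If the data is still in Salesforce rather than already indexed, the nested SOQL may need to be flattened into separate object queries on the collection side first.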
Can I configure a Darktrace asset that routes through the automation broker? When attempting to configure it, the broker is greyed out and will not let me select it. (Yes, the broker is on and active.)
Hello community, I'm having a problem with what is probably a simple addition, but I can't find a solution. I run a simple query that returns a count per source using a field called "routingKey". However, in this example I have duplicate routingKeys with different names (for example, routingdynatrace_2 and dynatrace_2 are actually the same source). This is due to a change in the way I collect my data, which changed the name of the routingKey. The data is however not the same (the data under the routingKey "routingdynatrace_2" is not the same as under "dynatrace_2"). My question is: how do I add the two routingKeys together after the count to get the overall total? I tried renaming the routingKey upstream, but the query does not add them together after the rename. If you have any ideas, I'm interested. Sincerely, Rajaion
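A minimal sketch of one way to merge the two keys after the count, assuming the totals come from a stats count by routingKey (the base search here is a placeholder): normalize the key names first, then sum the counts.

... | stats count BY routingKey
| eval routingKey=replace(routingKey, "^routing", "")
| stats sum(count) AS count BY routingKey

The second stats adds routingdynatrace_2 and dynatrace_2 together because both rows end up with the same normalized key.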
Hi Team, for a business requirement, I need to validate that a log file was generated in the last hour for each combination of host and source in the list below:

Host        Source
server001   c\:...\logpath1.txt
server002   c\:...\logpath2.txt
server003   c\:...\logpath3.txt
server004   c\:...\logpath4.txt
server005   c\:...\logpath5.txt

My understanding is that the inputlookup keyword is single-column based; however, I need two columns to check the log files. Can you please suggest the best way to accomplish my requirement? Thanks in advance!
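A sketch of one approach, assuming the pairs live in a lookup file named host_source_lookup.csv with columns Host and Source (hypothetical name): list what actually reported in the last hour with tstats, append the expected pairs with a zero count, and keep the combinations that never showed up.

| tstats count WHERE index=* earliest=-1h BY host source
| rename host AS Host, source AS Source
| append [| inputlookup host_source_lookup.csv | eval count=0]
| stats sum(count) AS events BY Host, Source
| where events=0

Any Host/Source pair returned here produced no events in the last hour.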
I'm using Splunk Enterprise 9.1 with Windows Universal Forwarders. I'm ingesting the Windows Domain Controller netlogon.log file. The Splunk Add-on for Windows has all the parsing/extraction rules defined for me to parse netlogon.log via its sourcetype=MSAD:NT6:Netlogon definition. Now, my use case is that I only wish to retain certain lines from netlogon.log and discard all others. How can I achieve this? Is it a case of defining a new sourcetype and copying the props/transforms from Splunk_TA_Windows, or is there a way to keep using sourcetype=MSAD:NT6:Netlogon and discard the lines via some other mechanism that does not result in my modifying the Splunk_TA_Windows app?
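One common mechanism is to keep the sourcetype and filter at parse time on the indexer or heavy forwarder, placed in a small custom app so Splunk_TA_Windows itself stays untouched. A sketch, where the REGEX in the second transform is a hypothetical placeholder for the lines you want to keep:

props.conf
[MSAD:NT6:Netlogon]
TRANSFORMS-netlogon_filter = netlogon_discard_all, netlogon_keep_wanted

transforms.conf
[netlogon_discard_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[netlogon_keep_wanted]
REGEX = pattern_of_lines_to_keep
DEST_KEY = queue
FORMAT = indexQueue

Because props.conf settings merge across apps, this layers filtering on top of the TA's existing parsing rules without modifying the TA; note it only takes effect where the data is parsed, not on a universal forwarder.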
Is it possible to delete my AppDynamics account? I have googled and searched for the option for months, ever since my trial expired, but can't seem to find it. I opened an AppDynamics account to learn more about it since I was being put on a project at work. I no longer need the account. If it is not possible, is there at least a way to get the update emails to stop?
Hi, I am getting a "You do not have permissions to access objects of user=admin" error message when using the Analytics Store. I am logged in as administrator, but I am still getting the error. Thanks, Pravin
Hi everyone, I am currently working on creating data models for a Splunk app. For this app, I am planning to design one main dataset with multiple child datasets. These child datasets are at the same level and might have fields with the same name. Please note that all the fields are evaluated at the child dataset level and not at the root dataset. Also, the type of events in different child datasets might differ; in one child it might be syslog, in another child it might be JSON, etc. It looks something like this:

Datamodel: Datamodel_Test
  Root Dataset: Root (index IN (main))
    Child Dataset: Child1 (sourcetype="child1")
      Category
      Severity
      Name1
    Child Dataset: Child2 (sourcetype="child2")
      Severity
      Name
  Root Dataset: Root2 (index IN main)

Main questions:
- Severity is not available in Child2 (| tstats summariesonly=false values(Root.Child2.Severity) from datamodel=Datamodel_Test where nodename=Root.Child2)
- Name is available in Child2 as it's renamed to Name1 in Child1 (| tstats summariesonly=false values(Root.Child2.Name) from datamodel=Datamodel_Test where nodename=Root.Child2)
- Also, Root2 is not available as a root dataset to the query and it's not showing any events (| tstats summariesonly=false count as Count from datamodel=Datamodel_Test by nodename)

We tried different things to get through this, but we are stuck at this issue. Is this expected behavior or a bug in Splunk?
Is it possible to determine which fields are sent from a heavy forwarder to another system? I'm asking this because I have a problem where TrendMicro logs can't be read by QRadar.
Hello Splunk community, one of my indexes doesn't seem to have indexed any data for the last two weeks or so. These are the logs I see when searching index="_internal" for the index name:

05-26-2024 02:19:36.947 -0400 INFO Dashboard - group=per_index_thruput, series="index_name", kbps=7940.738, eps=17495.842, kb=246192.784, ev=542437, avg_age=0.039, max_age=1
05-26-2024 02:19:07.804 -0400 INFO DatabaseDirectoryManager [12112 IndexerService] - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/…/db duration=0.013
05-26-2024 02:19:07.799 -0400 INFO DatabaseDirectoryManager [12112 IndexerService] - idx=index_name writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/…/db' pendingBucketUpdates=0 innerLockTime=0.009. Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
05-26-2024 02:19:05.944 -0400 INFO Dashboard - group=per_index_thruput, series="index_name", kbps=10987.030, eps=24200.033, kb=340566.581, ev=750132, avg_age=0.032, max_age=1
05-26-2024 02:18:59.981 -0400 INFO LicenseUsage - type=Usage s="/opt/splunk/etc/apps/…/…/ABC.csv" st="name" h=host o="" idx="index_name" i="41050380-CA05-4248-AFCA-93E310A1E6A9" pool="auto_generated_pool_enterprise" b=6343129 poolsz=5368709120

What could be a reason for this and how could I address it? Thank you for all your help!
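A couple of checks that may help narrow it down (index_name is a placeholder for the real index): confirm when the index last received events per sourcetype, and chart the indexing throughput reported in the _internal metrics.

| tstats latest(_time) AS last_event count WHERE index=index_name BY sourcetype
| convert ctime(last_event)

index=_internal group=per_index_thruput series="index_name" | timechart span=1h sum(kb) AS kb_indexed

If the throughput is still non-zero, the data may be landing with unexpected timestamps or in a different index rather than not arriving at all.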
Hi Community, I currently have a cron job that gets values for today and tomorrow every day. How do I extract the value for "today" or "tomorrow"? This SPL doesn't work and doesn't rename my field to a fixed field name:

| eval today=strftime(_time,"%Y-%m-%d")
| rename "result."+'today' AS "result_today"
| stats list(result_today)

Here is my raw event...
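rename cannot build a field name from an expression, so one workaround (a sketch, assuming the daily fields are named like result.YYYY-MM-DD) is to let foreach walk the result.* fields and copy the one matching today's date into a fixed field name:

| eval today=strftime(_time,"%Y-%m-%d")
| foreach result.* [ eval result_today=if("<<FIELD>>"=="result.".today, '<<FIELD>>', result_today) ]
| stats list(result_today) AS result_today

For tomorrow, the same pattern works with strftime(relative_time(_time,"+1d"),"%Y-%m-%d").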
Hi there, I am trying to get some data from MS Defender into a Splunk query. My original KQL query in Azure contains a | join kind=inner to combine the DeviceProcess and DeviceRegistry tables. The Splunk app I am using: https://splunkbase.splunk.com/app/5518 So basically I'd like to join DeviceProcess and DeviceRegistry events from the advanced hunting query (| advhunt) in Splunk SPL. Is there a suitable Splunk query for this kind of purpose?
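A sketch of the SPL-side equivalent once both event types are available to search, with hypothetical index/sourcetype names and DeviceId assumed as the join key; the same pattern applies to whatever the advhunt command returns (check the add-on's documentation for its exact arguments):

index=defender sourcetype=device_process_events
| join type=inner DeviceId
    [ search index=defender sourcetype=device_registry_events ]

For larger result sets, combining both searches and using stats values(*) AS * BY DeviceId usually scales better than join, which is subject to subsearch limits.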
Hello All, please I need help with the below. I am trying to display a particular column with the query below, but I get a 'no results found' output:

| inputlookup TagDescriptionLookup.csv
| fields Site UnitName TagName TagDescription Units
| where column = "TagName"
| rename column AS ColumnName
| table ColumnName

Thanks
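A sketch of one likely fix: the lookup has no field literally named column, so where column = "TagName" filters every row out. If the goal is to display the TagName column itself, reference the field directly:

| inputlookup TagDescriptionLookup.csv
| fields TagName
| rename TagName AS ColumnName
| table ColumnName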
I installed a new Splunk pre-production (pprod) platform and I would like to migrate all the prod data to the new platform. I restored the prod search head cluster onto the pprod cluster using the .bundle backup and restore procedure described in this link: https://docs.splunk.com/Documentation/Splunk/8.2.12/DistSearch/BackuprestoreSHC The problem I have is a difference in the number of lookups between prod and pprod (pprod contains 1240 lookups and 58 data models, while prod contains 1270 lookups and 59 data models). Why do I have this difference even though I restored the pprod cluster from the prod .bundle? What can I do to have the same numbers on both platforms?
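One way to see which objects differ is to list them on each environment and compare the outputs (a sketch; this REST endpoint lists lookup table files rather than lookup definitions, and data/models covers the data models):

| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local
| table title eai:acl.app eai:acl.owner eai:acl.sharing

Differences often come from objects that are private to a user, created outside the replicated bundle, or shipped by apps that exist on only one of the clusters.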
I need to pull the license usage in GB for the top 100 hosts, along with their respective index, source, and sourcetype information, on a monthly basis for reports. Kindly help with the query.
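A sketch based on the license usage summary events on the license manager, where h, idx, s, st, and b carry the host, index, source, sourcetype, and bytes (note that Splunk squashes h and s when there are many distinct values, so the breakdown can be approximate):

index=_internal source=*license_usage.log* type=Usage earliest=-1mon@mon latest=@mon
| stats sum(b) AS bytes BY h, idx, s, st
| eval GB=round(bytes/1024/1024/1024, 2)
| sort 100 -GB
| rename h AS host, idx AS index, s AS source, st AS sourcetype
| table host, index, source, sourcetype, GB

This keeps the 100 largest host/index/source/sourcetype combinations for the previous month; group by h alone first if you strictly want the top 100 hosts.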
I am following the documentation to log events using JavaScript: https://dev.splunk.com/enterprise/docs/devtools/javascript/logging-javascript/loggingjavascripthowtos/howtologhttpjs I am sending the data as below, but I can't see any of the keys in the Splunk log.

var payload = {
    message: {
        temperature: "70F",
        chickenCount: 500
    }
};
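A quick check on the Splunk side (index and sourcetype are placeholders for whatever the HEC token is configured to write to) to see whether the nested keys arrived but simply weren't extracted automatically:

index=main sourcetype=httpevent
| spath
| table _time temperature chickenCount message.temperature message.chickenCount

If the fields only appear under a message.* prefix, the payload is reaching Splunk and the question becomes how the logging library wraps the event body.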
Hi Team, one of the endpoints (Get shipping) under the business transaction is not being captured consistently, and I don't know why it's behaving this way. Can anyone help me with this issue? I selected data from the last 8 days.
In a multi-site cluster, if the settings were initially

site_replication_factor = origin:2,total:2
site_search_factor = origin:1,total:1

and later I change them to

site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

will the old data also be replicated according to the new replication and search factors, or will only the new data have replica copies per the new replication and search factors?
Hi Team,   We have deployed Splunk Cloud in our environment and currently have a requirement to generate monthly report statistics separately based on Index, Host, Source, and Sourcetype. Could you please provide the queries to pull the required statistics in Splunk? We need separate reports for the top 10 in GB, excluding internal indexes and their sourcetypes. Your assistance with the query is much appreciated.        
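A sketch of one way to build these from the license usage logs, assuming license_usage.log is searchable in your stack; change the BY field to st, h, or s for the sourcetype, host, and source variants, and idx!=_* drops the internal indexes:

index=_internal source=*license_usage.log* type=Usage idx!=_* earliest=-1mon@mon latest=@mon
| stats sum(b) AS bytes BY idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort 10 -GB
| rename idx AS index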
Hello Community, I am forwarding the logs using the syslog output instead of TCP. I can see the packets arriving with tcpdump and everything looks good, but the data is not showing up on the receiving side. This is my configuration on the HF:

outputs.conf

[syslog]
defaultGroup = group2

[syslog:remote_siem]
server = xx.xx.xx.xx:514
sendCookedData = false

transforms.conf

[send_tmds_to_remote_siem]
REGEX = .
SOURCE_KEY = _MetaData:Index
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem

[send_tmao_to_remote_siem]
REGEX = .
SOURCE_KEY = _MetaData:Index
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_siem

props.conf

[source::udp:1518]
TRANSFORMS-send_tmds_to_remote_siem = send_tmds_to_remote_siem

[source::udp:1517]
TRANSFORMS-send_tmao_to_remote_siem = send_tmao_to_remote_siem

Is this fine, or is something incorrect? Please help.
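For comparison, a sketch of the syslog output group with the transport made explicit; type defaults to udp, so if the receiving SIEM is listening for TCP syslog the traffic can show up in tcpdump and still be ignored. Exact settings should be checked against the outputs.conf spec for your version:

[syslog:remote_siem]
server = xx.xx.xx.xx:514
type = udp
sendCookedData = false

Also note that _SYSLOG_ROUTING transforms only take effect where the data is parsed, so they need to run on the heavy forwarder that receives the UDP inputs.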