All Posts


You can continue to use the props/transforms from the Splunk Windows TA, but you need to put your changes in /local/props.conf and /local/transforms.conf. Create a local folder within the TA and add the two files there; if you change the default props and transforms instead, they will get overwritten during future upgrades. Configure as per the example below. You will need to work out which events you want to discard and write a regex for them; everything else will still be indexed into Splunk.

# props.conf
[MSAD:NT6:Netlogon]
TRANSFORMS-send_to_null_events = send_null_netlogin_events

# transforms.conf
[send_null_netlogin_events]
REGEX = <YOUR REGEX FOR LINES YOU DON'T WANT>
DEST_KEY = queue
FORMAT = nullQueue

The above configuration then needs to be placed on the indexers, or on the heavy forwarder if the data is sent there first (i.e. on full Splunk instances), so deploy the Windows TA that contains your new settings there. Note: a universal forwarder will not apply these transforms. Another way is to create your own sidecar TA, put the configuration there, and run it alongside the Windows TA.
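A minimal sketch of the sidecar-TA option, assuming an illustrative app name of TA-netlogon-filter and an example regex (NO_CLIENT_SITE) that you would replace with your own:

$SPLUNK_HOME/etc/apps/TA-netlogon-filter/
    local/props.conf
    local/transforms.conf

# local/props.conf
[MSAD:NT6:Netlogon]
TRANSFORMS-send_to_null_events = send_null_netlogin_events

# local/transforms.conf
[send_null_netlogin_events]
# Illustrative pattern only: discard NO_CLIENT_SITE lines, keep everything else
REGEX = NO_CLIENT_SITE
DEST_KEY = queue
FORMAT = nullQueue

Because Splunk merges props/transforms across apps at runtime, deploying this app alongside Splunk_TA_windows on the indexers or heavy forwarder applies the filter without touching the original TA.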
Can I configure a Darktrace asset that routes through the automation broker? When attempting to configure it, the broker is greyed out and will not let me select it. (Yes, the broker is on and active.)
| eval routingkey=if(routingkey="routingdynatrace_2","dynatrace_2",routingkey) | stats sum(count) as count by routingkey
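If more than one routingKey was renamed by the collection change, the same normalize-then-sum pattern scales with case(); this is a sketch only, and the routingdynatrace_1/dynatrace_1 pair below is purely illustrative:

| eval routingkey=case(routingkey="routingdynatrace_2", "dynatrace_2", routingkey="routingdynatrace_1", "dynatrace_1", true(), routingkey)
| stats sum(count) as count by routingkey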
@sylim_splunk It's not just a display problem. Queries using the REST interface also return no results. The strange thing is that on some deployment servers running the same version it works without problems, but a large part of them returns nothing from REST on /services/deployment/server/clients. On servers where the REST query doesn't work, there are also zero results under Forwarder Management/Clients. Regards Marco
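For reference, a hedged sketch of that REST check run from the deployment server's own search bar (the output field names are from memory and may differ between versions):

| rest splunk_server=local /services/deployment/server/clients
| table title hostname ip lastPhoneHomeTime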
Hi @Orange_girl , it seems that something changed: Splunk no longer has the required permissions to read the files; check them. Ciao. Giuseppe
Thanks Giuseppe. The logs I shared here are the last logs I received for this index. I also checked logs for ABC.csv which is used by the index, and same here - logs only until May 26th:

26/05/2024 02:19:39.647 // 05-26-2024 02:19:39.647 -0400 WARN TailReader [12321 tailreader0] - Access error while handling path: failed to open for checksum: '/opt/splunk/etc/apps/.../.../ABC.csv' (No such file or directory)
26/05/2024 02:19:38.208 // 05-26-2024 02:19:38.208 -0400 INFO WatchedFile [12321 tailreader0] - Will begin reading at offset=0 for file='/opt/splunk/etc/apps/.../.../ABC.csv'.
26/05/2024 02:19:38.208 // 05-26-2024 02:19:38.208 -0400 INFO WatchedFile [12321 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='/opt/splunk/etc/apps/.../.../ABC.csv'.
26/05/2024 02:19:37.621 // 05-26-2024 02:19:37.621 -0400 WARN TailReader [12321 tailreader0] - Insufficient permissions to read file='/opt/splunk/etc/apps/.../.../ABC' (hint: No such file or directory , UID: 0, GID: 0).
26/05/2024 02:19:37.512 // 05-26-2024 02:19:37.512 -0400 INFO WatchedFile [12321 tailreader0] - Will begin reading at offset=0 for file='/opt/splunk/etc/apps/.../.../ABC.csv'.
26/05/2024 02:19:37.512 // 05-26-2024 02:19:37.512 -0400 WARN LineBreakingProcessor [12299 parsing] - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 50968856 - data_source="/opt/splunk/etc/apps/.../.../ABC.csv", data_host="host", data_sourcetype="sourcetype"
26/05/2024 02:19:37.512 // 05-26-2024 02:19:37.512 -0400 INFO WatchedFile [12321 tailreader0] - Will begin reading at offset=0 for file='/opt/splunk/etc/apps/.../.../ABC.csv'.
26/05/2024 02:19:37.143 // 05-26-2024 02:19:37.143 -0400 WARN LineBreakingProcessor [12299 parsing] - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 50276856 - data_source="/opt/splunk/etc/apps/.../.../ABC.csv", data_host="host", data_sourcetype="sourcetype"
26/05/2024 02:19:36.947 // 05-26-2024 02:19:36.947 -0400 INFO Dashboard - group=per_source_thruput, series="/opt/splunk/etc/apps/.../.../ABC.csv", kbps=219.057, eps=482.877, kb=6791.592, ev=14971, avg_age=0.000, max_age=0

Would this be of any help?
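If it helps to keep an eye on these tailing errors from Splunk itself, a minimal sketch over the internal index (the file-name filter is illustrative and assumes the forwarder's internal logs are being indexed):

index=_internal sourcetype=splunkd log_level=WARN component IN (TailReader, WatchedFile) "ABC.csv"
| stats count by host, component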
Hello community, I'm having a problem with a probably stupid addition, but I can't find a solution. I run a simple query which returns a count by a field called "routingKey". However, in this example I have duplicate routingKeys with different names (example: routingdynatrace_2 and dynatrace_2 are actually the same source). This is due to a change in the way I collect my data, which changed the name of the routingKey. The data is however not the same (the data of the routingKey "routingdynatrace_2" is not the same as "dynatrace_2"). My question is: how do I add the two routingKey counts together to get the overall total? I tried renaming the routingKey upstream, but the query does not sum them after renaming. If you have any ideas, I'm interested. Sincerely, Rajaion
Hi @Orange_girl , check whether you received logs until the 31st of May. If yes, and the data flow stopped on the 1st of June, check the timestamp format: you probably missed a configuration, but until the 31st of May it didn't show up. So check the time format of your data. Ciao. Giuseppe
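As a hedged sketch of what to look at: the timestamp settings for the sourcetype in props.conf on the parsing instance. The stanza name and format string below are illustrative; the point is to make sure day and month are not swapped:

# props.conf
[your_sourcetype]
TIME_PREFIX = ^
# A day-first file read with a month-first format breaks as soon as the day exceeds 12
TIME_FORMAT = %d/%m/%Y %H:%M:%S.%3N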
Hi Giuseppe, I haven't changed anything in Splunk and the indexing used to work well; would this just randomly change by itself? I'm happy to check it though. Could you let me know where and what I should be looking for? Are you referring to the time value in the logs? Thank you.
Hi @ganeshkumarmoha  I suppose that you need to check whether you received events from each host with the corresponding source, is that correct? If this is your requirement, and if the source column has a fixed part that you can use for checking (e.g. the file name without the path), please try something like this:

<your_search>
| rex field=source "(?<Source>logpath\d*\.txt)$"
| rename host AS Host
| stats count BY Host Source
| append [ | inputlookup your_lookup.csv | eval count=0 | fields Host Source count ]
| stats sum(count) AS total BY Host Source
| where total=0

Ciao. Giuseppe
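The search above assumes that your_lookup.csv is a two-column lookup whose Source column holds just the file name, matching what the rex extracts; a hedged example of its contents:

Host,Source
server001,logpath1.txt
server002,logpath2.txt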
Hi Team, For a business requirement, I need to validate that the log file was generated in the last hour for each combination of host and source, as below:

Host        Source
server001   c:\...\logpath1.txt
server002   c:\...\logpath2.txt
server003   c:\...\logpath3.txt
server004   c:\...\logpath4.txt
server005   c:\...\logpath5.txt

I understood the inputlookup keyword to be single-column based; however, I need two columns to check the log files. Can you please suggest the best way to accomplish this requirement? Thanks in advance!
I'm using Splunk Enterprise 9.1 with Windows Universal Forwarders. I'm ingesting the Windows Domain Controller netlogon.log file. The Splunk Add-on for Windows has all the parsing/extraction rules defined for me to parse netlogon.log via its sourcetype=MSAD:NT6:Netlogon definition. Now, my use case is that I only wish to retain certain lines from netlogon.log and discard all others. How can I achieve this? Is it a case of defining a new sourcetype and copying the props/transforms from this Splunk_TA_Windows, or is there a way to keep using sourcetype=MSAD:NT6:Netlogon and discard the lines via some other mechanism that does not involve modifying the Splunk_TA_Windows app?
Hi @dhvanilshah , use calculated fields containing all the conditions to manage the different extraction methods. Ciao. Giuseppe
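A minimal sketch of what that could look like as calculated fields in props.conf on the search head; the sourcetype names and the raw fields (severity_text, sev_num) are assumptions:

# props.conf (search head)
[child1]
EVAL-Severity = coalesce(severity_text, "unknown")

[child2]
EVAL-Severity = case(sev_num>=4, "high", sev_num>=2, "medium", isnotnull(sev_num), "low", true(), "unknown")

With the evaluation done per sourcetype at search time, the data model can then reference a single Severity field instead of defining it differently in each child.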
I suppose one can argue that it falls under the "cannot use sampling" restriction, but I agree that it could be worded more explicitly. There is a feedback form at the end of the docs page; you're more than welcome to send feedback about this missing info. They do read it and react!
Hi @gcusello,  When I run the child constraint searches in preview mode, I am able to see the fields extracted. It seems that Splunk does not support different evals across child datasets for the same field name. So, in my case, Root1 doesn't have any fields extracted. Child1 and Child2 have fields extracted in different ways, and they have the same names, i.e., Severity, Name, etc. Is this what you're asking? If not, could you please help me understand?
Is it possible to delete my App Dynamics account? I have googled and searched for the option for months, ever since my trial expired, but can't seem to find it. I opened an App Dynamics account to learn more about it since I was being put on a project at work. I no longer need the account. If it is not possible, is there at least a way to get the update emails to stop?
Hi,  I am getting a "You do not have permissions to access objects of user=admin" error message when using the Analytics Store. I am logged in as administrator, but I still get the error.  Thanks, Pravin
Hi @dhvanilshah , what about running the constraint searches for each child: do you have all the fields you need? If not, you have to redesign your data model; if yes, you should try to add the missing fields to the field list of each child. Ciao. Giuseppe
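A hedged way to run that check, reusing the constraints from the question in this thread (fieldsummary simply lists which fields are actually present in the results):

index IN (main) sourcetype="child1"
| fieldsummary
| table field count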
Hi @hazem , did you read https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Migratetomultisite#How_the_cluster_migrates_and_maintains_existing_buckets ? especially "If you have a large number of existing buckets, the process can take a long time to complete". Anyway, migration from single site to multisite is a job for a Certified Splunk Architect, not for the Community. Ciao. Giuseppe
Hi everyone, I am currently working on creating data models for a Splunk app. For this app, I am planning to design one main dataset with multiple child datasets. These child datasets are at the same level and might have fields with the same name. Please note that all the fields are evaluated at the child dataset level, not at the root dataset. Also, the type of events in different child datasets might be different, that is, in one child it might be syslog, in another child it might be JSON, etc. It looks something like this:

Datamodel: Datamodel_Test
  Root Dataset: Root (index IN (main))
    Child Dataset: Child1 (sourcetype="child1")
      Category
      Severity
      Name1
    Child Dataset: Child2 (sourcetype="child2")
      Severity
      Name
  Root Dataset: Root2 (index IN (main))

Main questions:
Severity is not available in Child2 (| tstats summariesonly=false values(Root.Child2.Severity) from datamodel=Datamodel_Test where nodename=Root.Child2)
Name is available in Child2 as it's renamed to Name1 in Child1 (| tstats summariesonly=false values(Root.Child2.Name) from datamodel=Datamodel_Test where nodename=Root.Child2)
Also, Root2 is not returned as a root dataset by the query and it's not showing any events. (| tstats summariesonly=false count as Count from datamodel=Datamodel_Test by nodename)

We tried different things to get through, but we are stuck on this issue. Is this expected behavior or a bug in Splunk?
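A hedged way to see which fields each node really exposes, using the datamodel command with the names from this post (in search mode the fields keep their Root.ChildN lineage, while flat mode drops the prefixes):

| datamodel Datamodel_Test Child2 search
| fieldsummary
| table field count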