All Topics

Need some insights into how SAP PO 7.5 can be integrated with Splunk
Hi everyone, I have a requirement. I have a panel in a dashboard that consists of various fields. One field, "parent_chain", comes from the lookup "parent_chains.csv". Below is the code for the panel:

<row>
  <table>
    <search>
      <query>index=abc sourcetype=xyz source="/user.log" process-groups | convert timeformat="%Y-%m-%d" ctime(_time) AS Date | rename count as "Request Counts" | rex field=Request_URL "(?&lt;id&gt;[A_Za-z0-9]{8}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{4}[\-][A_Za-z0-9]{12})" | stats count by Date ADS_Id Request_Type id ClickHere Request_URL | sort - ADS_Id | join type=outer id [inputlookup parent_chains.csv] | where $Teams$</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <fields>"Date", "ADS_Id", "Request_Type", "Request_URL", "id", "parent_chain"</fields>
    <option name="count">100</option>
    <option name="dataOverlayMode">none</option>
    <option name="drilldown">cell</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
  </table>
</row>

The parent chain data looks like this:

MAIN-->root-->Oneforce
MAIN-->root-->Oneforce-->AP_Automation
MAIN-->root-->BLAZE - E3-->Lead IB jobs
MAIN-->root-->BLAZE - E3-->Blaze Transformation - Data Ingestion
...

My requirement is to create a text box such that when I type, say, "Data Ingestion", every parent_chain that contains "Data Ingestion" is shown; if I type "Blaze Transformation", every parent_chain that contains "Blaze Transformation" is shown. I have created the text box like this:

<input type="text" token="ckey" searchWhenChanged="true">
  <label>Parent Chain</label>
  <prefix>parent_chain="*</prefix>
  <suffix>*"</suffix>
</input>

Can anyone guide me on how I can achieve this?
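A minimal sketch of one way to finish the wiring, reusing the ckey token exactly as defined above: because the prefix/suffix already wrap whatever is typed into parent_chain="*...*", appending the token to the panel search as a post-filter should be enough. The empty <default> is an assumption so the panel still renders before anything is typed; note that once the filter applies, rows with no parent_chain value (e.g. outer-join misses) will be excluded.

<input type="text" token="ckey" searchWhenChanged="true">
  <label>Parent Chain</label>
  <prefix>parent_chain="*</prefix>
  <suffix>*"</suffix>
  <default></default>
</input>

<!-- in the panel, append the token after the existing filter -->
<query>... | where $Teams$ | search $ckey$</query>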
I'm performing a lookup against a csv and need to use two columns (description and function) to return the correct value. However, I have a case where I need to explicitly specify the function, while using a description value from an event. Here's a lookup that works for me:

| lookup products.csv ProductDescription as description, ProductFunction as function OUTPUTNEW Name as ProductName

What I need to do is this:

| lookup products.csv ProductDescription as description, ProductFunction as "Kitchen Appliance" OUTPUTNEW Name as ProductName

When the event happens to contain "Kitchen Appliance" as the function, the lookup works, but if I explicitly specify "Kitchen Appliance" as the lookup value, nothing is returned. Is this usage supported?
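For what it's worth: in the lookup command, the name after "as" is taken as an event field name, not a literal, so "Kitchen Appliance" is treated as a (nonexistent) field and nothing matches. A common workaround, sketched here with the field names from the question, is to materialize the constant as a field first:

| eval function="Kitchen Appliance"
| lookup products.csv ProductDescription as description, ProductFunction as function OUTPUTNEW Name as ProductName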
I'm looking for advice on avoiding the 'funnel effect' for a small number of intermediate universal forwarders feeding about 15 cloud indexers. My best lead is to apply parallelization; has anyone had success with that, specifically with the Cloud version? Are there alternatives?

Here is a presentation on the issue, where the problem is described in depth but the parallelization solution is only briefly touched on:

Video - https://www.splunk.com/en_us/resources/videos/worst-practices-and-how-to-fix-them.html
Presentation - https://conf.splunk.com/files/2017/slides/worst-practicesand-how-to-fix-them.pdf

Thanks for any advice!
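For anyone framing the same question: the usual parallelization knob on an intermediate forwarder is parallelIngestionPipelines, often paired with a shorter load-balancing interval so each pipeline cycles across the indexers faster. A sketch only; the values are assumptions to be tuned, not recommendations:

# server.conf on the intermediate forwarder
[general]
parallelIngestionPipelines = 2

# outputs.conf on the intermediate forwarder
[tcpout:cloud_indexers]
autoLBFrequency = 10    # default is 30 seconds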
Hello, new to Splunk. Can Splunk monitor or audit changes to a NetApp file share? We would like to know about file or folder deletions, moves on those shares, etc. I have installed and configured the ONTAP add-on and application and have data coming in.

Additionally, I am trying to convert my WinEventLog:Application logs to a metrics index but want to retain the event data from this sourcetype as well. Is it possible to have both? Will this affect my license usage if I am sending the same data to both a metrics index and an event index?

Thank you
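On the second question: one way to keep both copies is to clone the data at parse time and route the clone to the metrics side; be aware that cloned data is metered separately, so it counts against the license twice. A sketch, with the stanza and cloned sourcetype names as assumptions; the cloned sourcetype would still need its own log-to-metrics configuration and index routing, which is left out here:

# transforms.conf
[clone_winevt_for_metrics]
REGEX = .
CLONE_SOURCETYPE = WinEventLog:Application:metrics

# props.conf
[WinEventLog:Application]
TRANSFORMS-clone_metrics = clone_winevt_for_metrics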
Absolutely getting nowhere with this. Tried running the auto-generated scripts and creating the install manually, and getting the same results:

write splunk plugin: curl_easy_perform failed to connect to colart.splunkcloud.com:8088 with status 7: Couldn't connect to server

Made sure collectd.conf is correct:

<Plugin write_splunk>
  server "my splunk cloud URL"
  port "8088"
  token "My HEC token"
  ssl true
  verifyssl false
  Dimension "entity_type:nix_host"
</Plugin>

Can anyone help?
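One way to isolate whether this is a collectd problem or a network/endpoint problem is to hit HEC directly with curl from the same host. Also worth checking: managed Splunk Cloud stacks typically expose HEC at https://http-inputs-<stack>.splunkcloud.com on port 443 rather than at the stack hostname on port 8088, though that can vary by environment. A sketch with placeholder hostname and token:

# if this also fails with "Couldn't connect", suspect DNS/firewall or the wrong endpoint
curl -k "https://http-inputs-colart.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <your HEC token>" \
  -d '{"event": "hec connectivity test"}'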
We are seeing logs like this that might contain a base64-encoded encrypted password. We would like to know whether leaving this in Splunk is a risk, since we don't know how strong this encryption is; whether it is possible to remove it at the source; and whether we should instead write a filter in transforms to discard this information.

Dec 9 15:54:07 10.X.Y.Z date=2020-12-09 time=15:54:08 devname="FORTIGATE" devid="FG100" logid="0100044547" type="event" subtype="system" level="information" vd="root" eventtime=1607547248698893780 tz="-0500" logdesc="Object attribute configured" user="admin" ui="GUI(10.X.Y.Z)" action="Add" cfgtid=10289326 cfgpath="user.local" cfgobj="USER" cfgattr="type[password]passwd[ENC JcB/fAvi7lxawB9OqPN2t8WE4MnLXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXvmv1EMRzPZPdMAhWppDbaqNyr1tGx5eDmxg==]" msg="Add user.local USER"
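If masking at index time is acceptable, a SEDCMD in props.conf on the indexer or heavy forwarder can blank the encrypted blob before it is written to disk (it won't help with data that is already indexed). A sketch, with the sourcetype name as an assumption:

# props.conf
[fortigate_event]
SEDCMD-mask_enc_passwd = s/passwd\[ENC [^\]]+\]/passwd[ENC #MASKED#]/g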
Hi,

Follow these steps to reproduce the problem:

1. Create two TCP data inputs. In my case I used ports 9850 and 9860.
2. Send data continuously to port 9850. I happened to use a few lines of C# to accomplish this, but I've also reproduced the issue in other programming languages. My source is appended below.
3. Disable port 9860 using the UI. Our connection to port 9850 is immediately closed and an exception is thrown.

Does anyone know why this happens? I would have thought that only connections to port 9860, the port we are closing, would be disrupted.

Thanks,
John

C# code to send data to the TCP data input:

using System.Net.Sockets;
using System.Text;

TcpClient client = new TcpClient("172.30.40.126", 9850);
NetworkStream stream = client.GetStream();
byte[] data = Encoding.ASCII.GetBytes("Test log event");
while (true)
{
    // throws when Splunk closes the connection (step 3 above)
    stream.Write(data, 0, data.Length);
}
I am trying to work around not having access to the _internal index; I can't get access at this time. I want to add annotations to a dashboard showing the last time certain alerts triggered. I know how to get an annotation working; I used loadjob, but the issue is that I can't seem to get historical data accurately. I want to be able to look at the previous day and see the alerts that fired in that time period.

I was doing something like the following; I haven't used REST much and am still exploring it:

| rest /servicesNS/-/-/saved/searches
| join title
    [| rest /servicesNS/-/-/alerts/fired_alerts]
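Not a full answer, but a sketch of the direction, with the caveat that the field names are from memory and should be verified against the raw | rest output in your environment: the fired_alerts endpoint returns one summary row per alert, including a triggered_alert_count, so restricting both calls to a few fields and joining on title may be enough for a "which alerts fired" annotation.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| fields title description
| join type=inner title
    [| rest /servicesNS/-/-/alerts/fired_alerts splunk_server=local
     | fields title triggered_alert_count]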
Hi guys,

I'm looking to add a new column to my inputlookup. The idea is to mark the values that repeat, e.g.:

Name | Zone
Email Threat Attachment Delivered | Analytics
Voice mail spam | Network
Jira Heartbeat | N/A
Voice mail spam | Network

And do something like this:

Name | Zone | Repeats
Email Threat Attachment Delivered | Analytics | No
Voice mail spam | Network | Yes
Jira Heartbeat | N/A | No
Voice mail spam | Network | Yes

I know I can just do a | stats count by Name, but that doesn't solve my problem. Any hints on how to solve it would be greatly appreciated.
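A sketch of one way that should work, with the lookup file name as an assumption: eventstats counts occurrences of each Name/Zone pair across all rows without collapsing them (which is where plain stats falls short), and an eval turns the count into the Yes/No flag.

| inputlookup my_lookup.csv
| eventstats count as dup_count by Name Zone
| eval Repeats = if(dup_count > 1, "Yes", "No")
| fields - dup_count

Appending | outputlookup my_lookup.csv would write the new column back to the file.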
Hi there,

Below is the logic I am trying to achieve: perform enrichment on a list of hosts via information extraction using Splunk's run query action, then add the results as an artifact.

1. Get the list of hosts and run each one through step 2.
2. Run a Splunk query that gets the host info (e.g. LDAP details).
3. Extract the details from the Splunk query and add an artifact.
4. Loop steps 2-4 until all the hosts in the list have been processed.

Issue: the first loop works fine; I am able to successfully run the query, extract the fields, place them in a format block, and then add them as an artifact. The problem comes in the succeeding searches: the Splunk searches appear to be successful, but the resulting data is not passed properly; the value shows as None. Note that the same format block works the first time you run the playbook. Another thing to note is that I am using a Join function and the built-in add artifact action.

Cheers,
Carl
Hi all, I am working on transaction logs where I have a log field with the below data. Below is an example of the data I have:

Transaction_Log :
{"message":"Some valid text", "Some valid text", "Some valid text"}
{"message":"Some valid text", "Some valid text", "Some valid text", "Some valid text"}
{"message":"Some valid text", "Some valid text", "Some valid text"}
{"message":"Printing data before Cart event-->(CartEvents__e:{CartId=A1234567, CartVolume=1034aef, CartCapacity=2000edc, CartExpiry=2nd Dec}), CartEvents__e:{CartId=A1234568, CartVolume=1045dkl, CartCapacity=2000lmn, CartExpiry=4rth Apr}), "levelNumber":"ER234", "logger":"ABCD"}
{"message":"Some valid text", "Some valid text", "Some valid text", "Some valid text", "Some valid text"}

In the above example, CartEvents__e is a group of key-value pairs, and there are many more fields than the ones shown here. Also, there can be multiple instances of CartEvents__e (two in my example data); there can be as many as 10.

How can I write my query so that the output looks as below? (I have referred to the documentation, but I am not able to figure this out correctly.)

CartId | CartVolume | CartCapacity
A1234567 | 1034aef | 2000edc
A1234568 | 1045dkl | 2000lmn

I am not able to figure out how exactly this can be done. Any help is greatly appreciated.
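A sketch of one approach, assuming the three keys always appear adjacent and in this order, and that the values never contain commas: pull every instance of each key with max_match=0, zip the multivalue fields together so each cart stays aligned, expand to one row per cart, and split back out.

| rex field=Transaction_Log max_match=0 "CartId=(?<CartId>[^,}]+),\s*CartVolume=(?<CartVolume>[^,}]+),\s*CartCapacity=(?<CartCapacity>[^,}]+)"
| eval cart=mvzip(mvzip(CartId, CartVolume), CartCapacity)
| mvexpand cart
| eval cart=split(cart, ",")
| eval CartId=mvindex(cart, 0), CartVolume=mvindex(cart, 1), CartCapacity=mvindex(cart, 2)
| table CartId CartVolume CartCapacity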
Hello all, I want to join two searches on one common field from both searches:

index=14 search_name="Daily Counts"
| rename A_USER_NM as USER_NM
| table Date USER_NM FILE_ID FILE_NM filecount
| join USER_NM
    [ search index=14 earliest=-24h@h latest=now sourcetype=user source=O  001
    | dedup USER_NM
    | table USER_NM USER_ID indicator ]

I tried the above search and it works, but it is not displaying all results; a few records are getting missed. Let me know where I am making a mistake, or whether there is another method I can use.
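A likely culprit: the subsearch inside join is subject to limits (by default it is finalized after 60 seconds and truncated at 50,000 results), so rows can be dropped silently. A stats-based merge avoids the subsearch entirely; a sketch, keeping the filters as written in the question, assuming USER_NM is the only shared field, and leaving out the differing time ranges of the two original searches (which would need to be reconciled, e.g. by running over the wider window):

index=14 ((search_name="Daily Counts") OR (sourcetype=user source=O  001))
| rename A_USER_NM as USER_NM
| stats values(Date) as Date values(FILE_ID) as FILE_ID values(FILE_NM) as FILE_NM
        values(filecount) as filecount values(USER_ID) as USER_ID values(indicator) as indicator
    by USER_NM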
Hi Team,

We are using the AppD .NET Core Linux Alpine image 20.7.0 for our .NET Core 3.1 application and trying to deploy AppDynamics with it. .NET applications that use the Kestrel server throw an exception when the AppD agent starts:

Unhandled exception while processing 0HM4SAP0EPM6O.
Category Name: Microsoft.AspNetCore.Server.Kestrel
Stack Trace
System.AggregateException:
   at Microsoft.Extensions.Logging.Logger.ThrowLoggingError (Microsoft.Extensions.Logging, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)
   at Microsoft.Extensions.Logging.Logger.Log (Microsoft.Extensions.Logging, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.KestrelTrace.Log (Microsoft.AspNetCore.Server.Kestrel.Core, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)
   at Microsoft.Extensions.Logging.LoggerExtensions.Log (Microsoft.Extensions.Logging.Abstractions, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)
   at Microsoft.Extensions.Logging.LoggerExtensions.LogCritical (Microsoft.Extensions.Logging.Abstractions, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.HttpConnection+<ProcessRequestsAsync>d__12`1.MoveNext (Microsoft.AspNetCore.Server.Kestrel.Core, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.ConnectionLimitMiddleware+<OnConnectionAsync>d__5.MoveNext (Microsoft.AspNetCore.Server.Kestrel.Core, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Infrastructure.KestrelConnection+<ExecuteAsync>d__32.MoveNext (Microsoft.AspNetCore.Server.Kestrel.Core, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)

Inner exception System.InvalidProgramException handled at Microsoft.Extensions.Logging.Logger.ThrowLoggingError:
   at HoneywellEOM.ModelRegistry.Utils.LoggerScopeLogger.Log (HoneywellEOM.ModelRegistry.Util, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null)
   at Microsoft.Extensions.Logging.Logger.<Log>g__LoggerLog|12_0 (Microsoft.Extensions.Logging, Version=3.1.8.0, Culture=neutral, PublicKeyToken=adb9793829ddae60)

We have also raised a ticket with AppD support: https://help.appdynamics.com/hc/en-us/requests/264921?page=1

The AppD .NET configs we are using:

ASPNETCORE_URLS: "http://*:5000"
CORECLR_ENABLE_PROFILING: "1"
CORECLR_PROFILER: "{57e1aa68-2229-41aa-9931-a6e93bbc64d8}"
CORECLR_PROFILER_PATH: "/opt/appdynamics/dotnet/libappdprofiler.so"
LD_LIBRARY_PATH: "/opt/appdynamics/dotnet"

@Anonymous can you help?

^ Edited by @Ryan.Paredez for readability and removing an email address.
Please do not share your or others' email addresses in community posts, for security and privacy reasons.
Hello, when I try to open the Splunk Add-on for OSSEC application from the App menu, I get a 404 Not Found error. I removed the folder from the add-ons directory and installed it again, but the problem is the same.
Hi, I'm indexing events in JSON format and I need a way of extracting into individual fields the values broken up by the pipe in the 'Subject' field seen below:

RecipientAddress: bla@bla.com
SenderAddress: fred@fred.com
Size: 201828
Status: FilteredAsSpam
Subject: 1|fdbe21c9-xxxxx|195.168.1.1|Comms@fred.com|([Ext]Hi, join us for the 10-year roundup) 12/11/2020 8:21:14 AM
ToIP: null

I'm struggling to get a regex to work; I'm not sure whether I need to take the JSON formatting into account. Thanks.
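Since the delimiter is fixed, split() may be easier than a regex here. A sketch, assuming Subject is already extracted as a field; the target field names are made up and worth renaming to taste:

| eval parts = split(Subject, "|")
| eval seq = mvindex(parts, 0), msg_id = mvindex(parts, 1), src_ip = mvindex(parts, 2), sender = mvindex(parts, 3), subject_text = mvindex(parts, 4)
| fields - parts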
Hello, is it possible to forward the _internal data of an indexer to a third-party system? I would like to forward some production events and also all the _internal events of the Splunk deployment, including events from the indexers.

Thank you,
Ondrej
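It should be, via outputs.conf on the indexers. Two details usually matter: third-party receivers need uncooked data (sendCookedData = false), and the forwardedindex filters in outputs.conf control whether internal indexes are included (the shipped defaults already re-whitelist _internal and _audit, but check outputs.conf.spec for your version). A sketch with placeholder host/port; the [indexAndForward] stanza keeps the local copy, since an instance with tcpout configured otherwise only forwards:

# outputs.conf on the indexer
[indexAndForward]
index = true

[tcpout]
defaultGroup = third_party

[tcpout:third_party]
server = receiver.example.com:514
sendCookedData = false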
I am using DB Connect 3.4.0 to get data from SQL Server. I have 2 working inputs with a rising id column and a timestamp (DATETIME) to index data. The server and table structure are the same for all inputs, but for 2 inputs I get this message in db_connect_server.log:

Thanks for your help.
I can search from Splunk Web using the below string:

cs_uri_stem="*/reporting/rptttt.xls" AND (cs_uri_query="reportName=ddd+Certification") | stats count by AssociateOID, OrgOID, date, o, reportName

But when I use the same search string in a REST API call, it doesn't work:

curl -ku username:paswd https://splunkapiurl:port/servicesNS/admin/search/search/jobs/export -d search="search cs_uri_stem="*/reporting/rptttt.xls" AND (cs_uri_query="reportName=ddd+Certification")|stats count by AssociateOID, OrgOID, date, o, reportName" -d output_mode=csv

Please help me resolve the issue.
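Two things stand out in that curl command: the outer double quotes collide with the double quotes inside the SPL (so the shell mangles the argument), and -d does not URL-encode characters like *, +, and |. A sketch of a safer form, using single quotes outside and letting curl do the encoding:

curl -ku username:paswd "https://splunkapiurl:port/servicesNS/admin/search/search/jobs/export" \
  --data-urlencode 'search=search cs_uri_stem="*/reporting/rptttt.xls" AND (cs_uri_query="reportName=ddd+Certification") | stats count by AssociateOID, OrgOID, date, o, reportName' \
  -d output_mode=csv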
Hello Splunkers,

We may have around 400 UFs, each forwarding 1 GB of events, for a total estimated daily ingestion of 400 GB. Our main aim is to forward these events to the customer's indexers (the customer also has an indexer cluster); however, the customer has a strong requirement that we provide validation for the logs, so we need to store them on our indexers as well (we are OK with the extra license consumption). We have decided to use an indexer cluster with 2 indexers on our side, so our indexer cluster will perform a dual role (storing events + forwarding, with names in events anonymized) using the indexAndForward configuration.

We are not allowed to use a HF in between, as the customer is concerned that different versions of the same event (one edit for us, a different edit for them) may be received; nor are we allowed to send the same event directly from the UFs to both their indexers and ours.

Our indexers have 32 GB RAM, 24 vCPUs, and xx TB of disk (RAID 10). Everything is in a single data centre. We also have a mainframe log requirement in the future.

Will this suffice for our needs? Someone told me that if I send events from UFs directly to indexers, it will open multiple queues at the indexers and hamper performance to a great extent; is that true? (I don't believe so.) Can anyone let me know how to estimate how many indexers are required based on daily ingestion capacity, considering RF2 & SF2 or RF1 & SF1? I've attached a diagram for better understanding.
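For reference, a minimal sketch of the indexAndForward wiring described above, with hostnames as placeholders; the [indexAndForward] stanza is what keeps the local copy, since an instance with tcpout configured otherwise just forwards. Any anonymization would still need its own props/transforms and is left out here.

# outputs.conf on our indexers
[indexAndForward]
index = true

[tcpout]
defaultGroup = customer_indexers

[tcpout:customer_indexers]
server = cust-idx1.example.com:9997, cust-idx2.example.com:9997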