All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, good day! I want to export all the business-transaction data for every application deployed on a server that has an AppDynamics agent installed. I would also like to know the following information for each:

Source
Target
Connector
API Name
Security
Integration Pattern
Network Connectivity
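If it helps as a starting point, a sketch of pulling this from the AppDynamics controller's REST API (host, credentials, and application name are placeholders; verify the endpoints against your controller version's API documentation):

# List every application the controller knows about
curl -u "user@customer1:password" "https://<controller-host>/controller/rest/applications?output=JSON"

# List the business transactions of one application
curl -u "user@customer1:password" "https://<controller-host>/controller/rest/applications/<application-name>/business-transactions?output=JSON"

Fields like Connector, Security, or Integration Pattern are not standard business-transaction attributes, so those would likely have to come from another source.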
Hello, preferably based on Linux (Red Hat, for instance): which log collector would you use to collect any kind of log (network devices, Check Point Log Exporter, system logs, application logs, Windows servers...)? Any newer solution than syslog-ng or rsyslog? Thanks.
Is it possible to allow access to the SaaS controller only from an allowed list of IP addresses defined by the customer, i.e., to prevent access from non-authorised IP addresses?
I have a question because I am confused about the majority principle in search head clustering captain elections. Assume a dynamic captain election.

[1] During a captain election, a member must receive votes from a majority of the cluster members to be elected. What does "majority" mean here? For example, if there are 5 search heads and one of them is down, does a candidate need 3 votes as a majority of the 4 reachable members, or 3 votes as a majority of all 5 configured members? In other words, are members that are down still counted when computing the majority?

[2] Also, according to the majority rule, there must be at least three search heads, and I don't understand why. For example, if there are only two members, A and B: B declares itself captain first, then A votes for B and B votes for itself, so more than half of the votes are satisfied and B can become captain, can't it?
Hello, I have a dashboard which uses one token; as a result, two things happen:

a search is run against an index with some static data
data is loaded from an external source (a .txt file available online) which changes every hour

This is the source: https://tgftp.nws.noaa.gov/data/observations/metar/stations/KJFK.TXT

Desired state: the dashboard takes an airport's ICAO code, displays static information about the airport from the index, and also loads the current external weather information (which changes very frequently). How can I load this external data in a dashboard without saving it in Splunk? I should state that I have no option to ingest this data into an index every hour or to create a lookup file locally; this data will change all the time, so ideally I would like to fetch it online every time. Many thanks in advance!
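One direction, as a sketch: core SPL has no command for fetching arbitrary URLs, but community apps add one. For example, the TA-webtools app ships a curl generating command; the command name, arguments, and curl_message output field below follow that app's conventions, so verify them against the version you install ($icao$ stands for the dashboard token):

| curl method=get uri="https://tgftp.nws.noaa.gov/data/observations/metar/stations/$icao$.TXT"
| table curl_message

On Splunk Cloud, such an app would need to pass app vetting before it can be installed, which may constrain this approach.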
I wanted to bring this issue to your attention. We upgraded from 3.10.0 of DB Connect to 3.11.0 back in November 2022. We use an external HEC destination for DB Connect to send its data to before it gets to Splunk, instead of the local/built-in DB Connect destination (and have been for over a year). There seems to be a bug when sending to an external HEC destination.

We started getting complaints in early January 2023 from users that data was missing in Splunk. We temporarily moved these inputs back to the internal HEC and the issue went away. I set up a test DB Connect on 3.11.0 with the same inputs, but sending to the external HEC and then to a test index. We ran a search to compare the test data with production data and saw that, throughout the day, there were many times when the inputs ran but the data did not make it into Splunk. The first clue there was an issue was seeing this in the logs every time the inputs ran:

[Scheduled-Job-Executor-3] ERROR c.s.d.s.d.r.HttpEventCollectorLoadBalancer - failed to post events:

I remembered that we had upgraded DB Connect back in November, so I downgraded the test DB Connect server back to 3.10.0. The "failed to post events" error went away, and all the data in test and prod matched up with no loss of data. I don't know what changed in DB Connect 3.11.0 and higher (3.11.1 has the same issue), but this is a fairly big one for me. I will stay with 3.10.0 for now, but someone from Splunk needs to look into this issue.
Hi, I am trying to list the best ways to map non-CIM-compliant data to the right data model. Is there a better way than using a field alias? And is there another way to correctly map data to be CIM compliant? Last question: if we use an add-on like the Splunk Add-on for Windows, does that mean our business data will automatically be made CIM compliant? Thanks
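For context, search-time CIM mapping is usually a combination of field aliases, calculated fields, lookups, and event types with tags; the tags are what bind events to a data model's constraints, so aliases alone are often not enough. A minimal sketch, where the sourcetype my:custom:st and its fields are invented for illustration:

props.conf:
[my:custom:st]
FIELDALIAS-src_for_cim = source_address AS src
EVAL-action = if(status=="OK", "success", "failure")

eventtypes.conf:
[my_custom_auth]
search = sourcetype=my:custom:st

tags.conf:
[eventtype=my_custom_auth]
authentication = enabled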
Hi all, I need some guidance for calculating the "SLA Achieved %" column. This is how my results look after running the base search:

Severity   Count_of_Alerts   Mean_Time_To_Close   SLA_Target   SLA_Achieved_%
S1         10                7 mins 8 secs        15 mins
S2         5                 6 mins 25 secs       45 mins

I have referenced the solution provided by @ITWhisperer in https://community.splunk.com/t5/Splunk-Search/adding-percentage-of-SLA-breach/m-p/572942#M199687, but in my case we also have a count column. We are OK with considering only the minutes portion of the time to close and ignoring the seconds if that is too complicated. How can I calculate my SLA achieved in %? Is it as simple as doing:

| eval SLA_Achieved = (Mean_Time_to_close*SLA_Target)/100

One further refinement: if the SLA % achieved is less than the target, then perhaps colour that cell green, else red (something along those lines).
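A sketch of one interpretation, where "SLA achieved" is the share of the SLA target left unused by the mean close time; the field names match the table above, and the duration parsing assumes the "X mins Y secs" format shown:

| eval mtc_secs = tonumber(replace(Mean_Time_To_Close, "^(\d+) mins? (\d+) secs?$", "\1")) * 60 + tonumber(replace(Mean_Time_To_Close, "^(\d+) mins? (\d+) secs?$", "\2"))
| eval sla_secs = tonumber(replace(SLA_Target, "^(\d+) mins?.*$", "\1")) * 60
| eval SLA_Achieved_pct = round((sla_secs - mtc_secs) / sla_secs * 100, 2)

For the S1 row this gives (900 - 428) / 900 * 100 = 52.44. The colouring can then be done with a colorPalette of type "expression" in the table's format options, comparing SLA_Achieved_pct against the threshold.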
Hello, we run an indexer that also functions as a deployment server. I have already configured it to use our CA cert for the Web UI on port 8000 as well as for the input port 9997; both work properly. However, I wasn't able to set our certificate for communication on the management port 8089: every request returns the pre-shipped self-signed certificate. Other solutions from this board didn't work, unfortunately. We are running Splunk Enterprise v9.0.3.

Configs on the indexer:

server.conf:

[sslConfig]
enableSplunkdSSL = true
sslVersions = tls1.2
sslRootCAPath = /opt/splunk/etc/auth/<ourcert>.pem
sslVerifyServerName = true
sslVerifyServerCert = true
sslPassword = <PW>
cliVerifyServerName = true

inputs.conf:

[splunktcp-ssl:8089]
disabled = 0

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/<ourcert>.pem
sslPassword = <PW>
requireClientCert = false
sslVersions = tls1.2
sslCommonNameToCheck = splunk.domain1,splunk.domain2

I'd be really happy if someone could help me out with this! Thank you!
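For what it's worth, splunkd serves the certificate on port 8089 from serverCert under [sslConfig] in server.conf; the [splunktcp-ssl:...] stanzas in inputs.conf only govern data input ports such as 9997, so 8089 should not be listed there. A sketch reusing the placeholders from the post (a splunkd restart is required afterwards):

server.conf:
[sslConfig]
enableSplunkdSSL = true
serverCert = /opt/splunk/etc/auth/<ourcert>.pem
sslPassword = <PW>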
We have Splunk Cloud and SOAR Cloud in our environment, and we want to integrate the SOAR audit log into Splunk Cloud. We have tried the "Splunk App for SOAR" app; the app has a built-in feature for index creation, so we created the index with that feature. We can't see any other configuration options. So how can these logs be integrated into Splunk?
Hi, I have a table in a dashboard which shows some information. For every row of the table, I want to include a column that links to another dashboard. The link has to be different for every row (including the filters in the link, like "?form.time_select.earliest=..."). I tried to do that using the $click.value2$ token in the "Link to custom URL" drilldown. The problem is that the token returns the value of the clicked field, but some special characters in that value are converted to a different format (for example, "?" is converted to "%3F"). Is there a way to get the exact string value of the clicked field with the $click.value2$ token? Or is there maybe a better way to solve my problem? Thanks for the help!
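If this is a Simple XML dashboard, token filters may help: appending |n to the token where it is consumed suppresses the default escaping (a sketch; the app and dashboard names are placeholders, and the filter behaviour should be verified against the Simple XML token-filter documentation for your version):

<drilldown>
  <link target="_blank">/app/my_app/target_dashboard?form.time_select.earliest=$click.value2|n$</link>
</drilldown>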
Hi, I am using Splunk Cloud, and IST is my preferred time zone; most logs show up in IST. However, some logs are reported in the UTC time zone and reach the search head with UTC timestamps, and I want those UTC timestamps to be reflected as IST. Can you please help me with this? If the way to do it is the TZ attribute in props.conf, what should the value of the TZ attribute be? And must props.conf be edited on the HF or on the indexer? Thanks in advance.
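A sketch, assuming the UTC events arrive under an identifiable sourcetype (my:utc:sourcetype is a placeholder):

props.conf:
[my:utc:sourcetype]
TZ = UTC

TZ declares the timezone of the raw timestamps and is applied at parse time, so it belongs on the first full parsing tier: the HF if one sits in this data's path, otherwise the indexers (on Splunk Cloud, indexer-side props are typically delivered via an uploaded app). Internally events are stored in UTC and rendered at search time in each user's preferred timezone, so with IST set as your user preference the events will display in IST.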
Hi all, I have a very simple use case: display the time difference between two fields whose values are already epoch timestamps. But when I use ctime to display the difference, it shows weird results. As shown below, my events contain two fields (tt0 and tt1) whose values are epoch timestamps. If we manually convert these to human-readable time, the difference between tt0 and tt1 is just 3 minutes and a few seconds.

tt0           tt1
1675061542    1675061732

But when I do:

| eval ttc=tt1-tt0
| convert ctime(ttc)

Splunk displays ttc as 12/31/1969 18:56:49.2304990. What am I doing wrong here? How can I make it display ttc correctly?
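For what it's worth, ttc here is a duration (190 seconds), not a point in time, so ctime renders it as an offset from the 1970 epoch. A sketch that formats the difference as a duration instead:

| eval ttc = tt1 - tt0
| eval ttc_readable = tostring(ttc, "duration")

For the sample values above, ttc = 190 and ttc_readable = "00:03:10".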
We are planning to upgrade our Splunk_TA_windows app (8.5.0 at the moment) to the latest version, and during a deep dive into the props and transforms I noticed all these transforms being called from Perfmon sourcetypes. Example:

[Perfmon:Processor]
EVAL-cpu_user_percent = if(counter=="% User Time",Value,null())
EVAL-cpu_load_percent = if(counter=="% Processor Time",Value,null())
FIELDALIAS-cpu_instance = instance AS cpu_instance
EVAL-cpu_interrupts = if(counter=="Interrupts/sec" AND instance=="_Total",Value,null())
## Creation of redundant EVAL to avoid tag expansion issue ADDON-10972
EVAL-windows_cpu_load_percent = if(counter=="% Processor Time",Value,null())
FIELDALIAS-dest_for_perfmon = host AS dest
FIELDALIAS-src_for_perfmon = host AS src
TRANSFORMS-_value_for_perfmon_metrics_store = value_for_perfmon_metrics_store
TRANSFORMS-metric_name_for_perfmon_metrics_store = metric_name_for_perfmon_metrics_store
TRANSFORMS-object_for_perfmon_metrics_store = object_for_perfmon_metrics_store
TRANSFORMS-instance_for_perfmon_metrics_store = instance_for_perfmon_metrics_store
TRANSFORMS-collection_for_perfmon_metrics_store = collection_for_perfmon_metrics_store
EVAL-metric_type = "gauge"

These transforms seem to extract data and store it in meta fields, like this one:

[value_for_perfmon_metrics_store]
REGEX = Value=\"?([^\"\r\n]*[^\"\s])
FORMAT = _value::$1
WRITE_META = true

We have until now indexed Perfmon data to event indexes. Will these transforms lead to unnecessary data storage on the indexer cluster? Should we comment out the transforms until we're ready to move Perfmon data over to metrics indexes?
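If you decide to neutralize them until the move to a metrics index, one common pattern is to blank the settings in a local file so the add-on's defaults are overridden rather than edited. A sketch, with the stanza and setting names copied from the excerpt above:

Splunk_TA_windows/local/props.conf:
[Perfmon:Processor]
TRANSFORMS-_value_for_perfmon_metrics_store =
TRANSFORMS-metric_name_for_perfmon_metrics_store =
TRANSFORMS-object_for_perfmon_metrics_store =
TRANSFORMS-instance_for_perfmon_metrics_store =
TRANSFORMS-collection_for_perfmon_metrics_store =

An empty value in a higher-precedence file clears the inherited setting. Note that WRITE_META = true makes these index-time transforms, so any change affects only newly indexed data.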
Why does walklex return spaces before some of the field names, while fieldsummary does not? When I see this without field extractions causing spaces in the field names, it usually happens to "special" fields. But these fields don't seem to exist if I try to search for or with them. Is this simply an output-parsing bug in walklex, or an indexing bug adding a space? If so:

1. Should the space be trimmed, or the event removed, to get the correct results?
2. Any context on why this happens with specific fields?

fieldsummary command, with no spaces in the field names:

index=indexName | fieldsummary | stats count by field

Example results from fieldsummary:

field
host
source
sourcetype
timestamp

walklex command, with spaces in the field names:

| walklex index=indexName type=field | stats count by field

Example results from walklex:

field
 host
 timestamp
host
timestamp
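A quick diagnostic sketch to confirm whether the leading whitespace is really part of the stored field names:

| walklex index=indexName type=field
| eval has_leading_space = if(match(field, "^\s"), "yes", "no")
| stats count by has_leading_space, field

If the space shows up here, those names differ from their trimmed counterparts, which would explain why searching with the trimmed name finds nothing.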
I was trying to send data through the Splunk HEC (HTTP Event Collector):

curl http://ip:8088/services/collector -H "Authorization: Splunk <HEC_TOKEN>" -d '{"event": "Test1"}{"event": "Test2"}{"event": "Test3"}'

In Splunk, it arrives like this:

Test1 (-> event 1)
Test2 (-> event 2)
Test3 (-> event 3)

The result I want is Test1Test2Test3, as a single event in Splunk.
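Batched JSON objects sent to /services/collector are parsed as separate events by design, so the simplest route to a single event is to send one event object; a sketch reusing the endpoint and token placeholder above:

curl http://ip:8088/services/collector -H "Authorization: Splunk <HEC_TOKEN>" -d '{"event": "Test1Test2Test3"}'

If the three strings genuinely arrive separately and must be combined on the Splunk side, the /services/collector/raw endpoint together with custom line-breaking props for the sourcetype is the other direction to explore.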
Hi, I'm implementing some searches provided by the Splunk Threat Research Team to detect threats from AD logs, but I cannot set all the required fields. For example, one of them is "Windows Computer Account Requesting Kerberos Ticket" (https://research.splunk.com/endpoint/fb3b2bb3-75a4-4279-848a-165b42624770/). It requires some fields that I cannot find, such as subject and action. Below is a sample log; I can't work out which value I should extract as "subject" and which as "action". I use "WinEventLog:Security" as the sourcetype and I have installed the TA-microsoft-windows add-on. Thank you.

LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4768
EventType=0
Type=Information
ComputerName=win-dc-128.attackrange.local
TaskCategory=Kerberos Authentication Service
OpCode=Info
RecordNumber=2106676187
Keywords=Audit Success
Message=A Kerberos authentication ticket (TGT) was requested.
Account Information:
    Account Name:           PC-DEMO$
    Supplied Realm Name:    attackrange.local
    User ID:                ATTACKRANGE\PC-DEMO$
Service Information:
    Service Name:           krbtgt
    Service ID:             ATTACKRANGE\krbtgt
Network Information:
    Client Address:         ::ffff:10.0.1.15
    Client Port:            59022
Additional Information:
    Ticket Options:         0x40800010
    Result Code:            0x0
    Ticket Encryption Type: 0x12
    Pre-Authentication Type: 2
Certificate Information:
    Certificate Issuer Name:
    Certificate Serial Number:
    Certificate Thumbprint:
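As a sketch of one way to derive the two fields from this 4768 event, assuming the field names Result_Code and Account_Name match the add-on's default extractions for WinEventLog:Security (check your events for the exact names):

| eval action = case(Result_Code=="0x0", "success", isnotnull(Result_Code), "failure")
| eval user = rtrim(Account_Name, "$")

In the CIM Authentication model that these detections build on, action is normally "success" or "failure", and Kerberos result code 0x0 indicates success; rtrim here strips the trailing $ that marks a computer account.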
Please help. I use _time from the log date and the current time from Windows, but when I try to subtract one from the other, nothing appears in the durationday column.

stats max(_time) as lastlogin by user
| eval n=time()
| eval today=strftime(n,"%m-%d-%Y %H:%M:%S.%Q")
| eval durationday = lastlogin - today
| table user,lastlogin,today,durationday

And the result is:

user         lastlogin                  today                      durationday
dsadadnk12   01-30-2023 11:10:27.208    01-30-2023 11:25:14.000
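The subtraction comes back empty because today has been turned into a string with strftime before the arithmetic; subtraction needs both operands as numbers (epoch seconds). A sketch that keeps the maths in epoch form and formats only for display:

stats max(_time) as lastlogin by user
| eval now_epoch = now()
| eval durationday = round((now_epoch - lastlogin) / 86400, 1)
| eval lastlogin_display = strftime(lastlogin, "%m-%d-%Y %H:%M:%S.%Q")
| eval today_display = strftime(now_epoch, "%m-%d-%Y %H:%M:%S.%Q")
| table user, lastlogin_display, today_display, durationday

86400 is the number of seconds in a day, so durationday comes out in days.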
Hi Splunk gods, I have an enquiry. I have an environment in which heavy forwarders send logs to clustered indexers. I need the multiple indexes below merged into a single index, index_general; basically, when a user searches index_general, they should be able to find all the logs contained in the three indexes.

1) Is this configuration feasible?
index_fw -> index_general
index_window -> index_general
index_linux -> index_general

2) If yes, does this configuration need to be done on the HF or on the indexers?

3) If the answer to question 2 is yes, which config file should be configured?
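If the intention is to route new incoming data to index_general (already-indexed data cannot be merged in place), one sketch is index-time rerouting on the first parsing tier, which here is the HF. The sourcetype name below is a placeholder:

props.conf:
[my_sourcetype]
TRANSFORMS-route_to_general = route_to_index_general

transforms.conf:
[route_to_index_general]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index_general

If the goal is only a single search handle, no data needs to move at all: an event type or macro wrapping index IN (index_fw, index_window, index_linux) lets users search all three with one term.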
Hi, I am tracking service requests and responses and trying to create a table that contains both, but the requests and responses are ingested into Splunk as separate lines. I have a common field (trace) which is present in both lines and unique for each request/response pair. Example:

line1: trace: 12345 , Request Received: {1}, URL:http://
line2: trace: 12346 , Request Received: {2}, URL:http://
line3: trace:12345 , Reponse provided: {3}
line4: trace:12346 , Reponse provided :{4}

The trace value is common to line 1 and line 3, and likewise to line 2 and line 4. I want the end result as a table:

trace   request   response
12345   {1}       {3}
12346   {2}       {4}
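A sketch that pairs the lines by trace with stats; the rex patterns mirror the sample lines above (including the "Reponse" spelling), so adjust them to the real event format:

| rex field=_raw "trace:\s*(?<trace>\d+)"
| rex field=_raw "Request Received: (?<request>\{\d+\})"
| rex field=_raw "Reponse provided\s*:\s*(?<response>\{\d+\})"
| stats values(request) as request, values(response) as response by trace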