All Topics



What is thruput in limits.conf of the Universal Forwarder? What does it do, and where is the file located? Are "throughput" and "thruput" the same thing or different?
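For reference, a minimal sketch of the stanza being asked about (the path and value below are examples only; on a UF the default limit is commonly 256 KBps):

```
# limits.conf, e.g. in $SPLUNK_HOME/etc/system/local/ on the forwarder.
# [thruput] caps how fast the forwarder reads and sends data.
[thruput]
maxKBps = 512
```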
Dear all, despite my best efforts I was not able to find satisfactory information, so I would like to ask if anyone here can help me with this. We have the UF running in a Docker container in a k8s environment. For getting data in, we are using batch/monitor on files stored on a persistent volume claim. Consider the following scenario: the container the UF is running in gets restarted while the UF is processing a file. After booting back up, the UF re-processes the entire file, leading to duplicates on the indexer. Is this something we need to account for, for example by checking that the UF is not currently processing anything before restarting? Or will the UF take care of all of this for us?
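Not an authoritative answer, but for context: a monitor input tracks read offsets in the fishbucket under $SPLUNK_HOME/var, so whether a restart re-reads a file can depend on that directory persisting across container restarts. A batch input with a sinkhole policy instead deletes each file once it is indexed; a minimal sketch (the path is a placeholder):

```
# inputs.conf sketch: batch with sinkhole deletes each file after indexing,
# so a restarted UF cannot re-read it (path and index are placeholders)
[batch:///data/logs/*.log]
move_policy = sinkhole
index = main
```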
I'm a bit lost. Every piece of information I find on the web (as well as material from Splunk's own training courses) says that the UF does only very limited input preparation (line breaking, metadata adjustment, character encoding) but no real parsing work. So I'm confused to find log entries such as: 12-03-2021 15:50:44.906 +0100 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Fri Dec 3 15:50:36 2021). Context:[...] That would mean that some timestamp parsing does take place on the UF. But do I still need to put my timestamp extraction config on the HF? (I use UF -> HF -> idx.) And how does this relate to the settings for breaking events on timestamps? If I want to break on timestamp (which should happen on the UF, right?), do I need to provide the timestamp format on both the UF (for breaking) and the HF (for parsing)?
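For context, a hedged sketch of where timestamp configuration usually lives in such a chain: since the UF forwards unparsed data, timestamp and line-breaking settings normally take effect on the first full parsing tier, here the HF (the sourcetype name and format below are assumptions):

```
# props.conf on the HF, the first parsing tier in a UF -> HF -> idx chain
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
```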
Hello, I would like to ask whether it is possible to pass a time restriction to the subsearch of a join? Unfortunately I did not find anything fitting in the forum. In my specific case I would like to enrich the results of search1 with the last event of search2 in which the ID is equal and the timestamp of search2 is not more than 5 minutes before the timestamp of search1.   index="summary_index" search_name="search1" ...|fields _time ID ... |join type=left left=L right=R usetime=true earlier=true where L.ID=R.ID [search index="summary_index" search_name="search2" |fields ...]    Does someone have an idea? Thanks in advance!
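One possible join-free pattern, sketched under the assumption that both saved searches share an ID field and that the 5-minute window is 300 seconds (untested, field names are placeholders):

```
index="summary_index" (search_name="search1" OR search_name="search2")
| fields _time ID search_name
| sort 0 _time
| streamstats latest(eval(if(search_name=="search2", _time, null()))) AS last_s2 by ID
| where search_name="search1" AND _time - last_s2 <= 300
```

The idea is to sort both result sets by time and let streamstats carry forward the most recent search2 timestamp per ID, then keep only search1 events within the window.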
Hello everyone, here's the situation: indexer1 (also deployment server), indexer2, forwarder1. I distributed via the deployment server a new outputs.conf with: [tcpout] defaultGroup = indexer1,indexer2 [tcpout:indexer1] server = xx.xx.xx.xx:9997 [tcpout:indexer2] server = indexer2.com:9997 There is a VS between forwarder1 and indexer2. I activated DEBUG in log.cfg for TcpOutputProc. The log on forwarder1 tells me only: 12-03-2021 15:08:15.743 +0100 DEBUG TcpOutputProc - channel not registered yet 12-03-2021 15:08:15.743 +0100 DEBUG TcpOutputProc - Connection not available. Waiting for connection ... and 12-03-2021 15:28:27.862 +0100 WARN TcpOutputProc - Cooked connection to ip=ip_vs_indexer2:9997 timed out A tcptraceroute reports [open] between the forwarder and the VS but doesn't show me any more than that. Does this mean I have a network issue? Do you have any suggestions? Thanks, Ema
Example: MyNameisKumar. I want name=kumar from this ingested data. Please help me with a solution.
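A hedged sketch, assuming every event literally contains the string "MyNameis" followed by the name (the base search is a placeholder):

```
your_base_search
| rex field=_raw "MyNameis(?<name>\w+)"
| eval name=lower(name)
| table name
```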
Hi, I have a very specific problem. I have a field with the following values at different timestamps, for example:
1,3,20
0
2,3,43,9,12
3,3,40,8,20,9,80
2,3,20,9,30
6,2,0,3,30,4,42,5,29,6,80,9,92
This field encodes very specific information which I need to extract to feed my calculation. The first number says how many fields there are to be extracted. The second number (and every other even position) is the name of a field to be extracted; the third number (and every other odd position) is the value of the field whose name comes just before it. That means the last example above decodes as: there are six (6) fields to be extracted, and the key:value pairs are 2:0, 3:30, 4:42, 5:29, 6:80, 9:92. I want to be able to extract these fields, assigning them the appropriate names. Is there a command / function that handles this well? Thanks in advance!
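One possible approach, sketched and untested (requires Splunk 8.0+ for mvmap; the field name "data" is a placeholder):

```
| eval parts=split(data, ",")
| eval n=tonumber(mvindex(parts, 0))
| eval idx=mvrange(1, 2*n, 2)
| eval kv=mvmap(idx, mvindex(parts, idx).":".mvindex(parts, idx+1))
| mvexpand kv
| rex field=kv "(?<key>[^:]+):(?<value>.+)"
```

mvrange generates the odd positions where keys sit, and mvmap pairs each key with the value that follows it, producing one key:value string per pair.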
Hello, I'm trying to ingest events from this Microsoft event log: [WinEventLog://Microsoft-Windows-TerminalServices-ClientActiveXCore/Microsoft-Windows-TerminalServices-RDPClient/Operational] disabled = 0 renderXml = 1 sourcetype = XmlWinEventLog index = ad   My issue is that this event log's name is the whole path, not just "Operational" like the others.   Because of that I get an error in Splunk: ERROR ExecProcessor [5076 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-TerminalServices-ClientActiveXCore/Microsoft-Windows-TerminalServices-RDPClient/Operational'   Is there a way to escape the "/" before Operational? Thank you very much in advance.
We are in the process of setting up comprehensive VPN dashboards, and we would like to enable alerting on these dashboards based on machine learning and standard deviation. Can someone help me achieve this?
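One common standard-deviation pattern, sketched with placeholder index/sourcetype names and an arbitrary 3-sigma threshold (the MLTK's DensityFunction is another option for the machine-learning side):

```
index=vpn sourcetype=vpn_logs
| timechart span=1h count AS logins
| streamstats window=24 avg(logins) AS avg_logins stdev(logins) AS sd_logins
| eval upper_bound=avg_logins + 3*sd_logins
| where logins > upper_bound
```

Saved as an alert, this fires whenever the hourly login count exceeds the rolling 24-hour mean by more than three standard deviations.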
My database collector is set to use a custom JDBC string: jdbc:oracle:thin:@(DESCRIPTION =     (ADDRESS_LIST =       (ADDRESS = (PROTOCOL = TCP)(HOST = 10.64.129.132)(PORT = 5350))       (ADDRESS = (PROTOCOL = TCP)(HOST = 10.64.129.133)(PORT = 5350))     )     (CONNECT_DATA =       (SERVICE_NAME = SERVICE_NAME.XYZ.COM)     )   ) So I would expect it to try to reach the database on the above IP addresses (the DB listeners), but instead it tries to connect to the Oracle server VIPs where the databases are installed, ignoring the listener IPs: Caused by: java.io.IOException: Connection timed out, socket connect lapse 127231 ms. server.xyz.com/10.64.50.184 5152 0 1 true                 at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)                 at oracle.net.nt.ConnOption.connect(ConnOption.java:161)                 at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)                 ... 29 more                 Caused by: java.io.IOException: Connection timed out, socket connect lapse 127231 ms. server.xyz.com/10.64.50.174 5152 0 1 true                 at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)                 at oracle.net.nt.ConnOption.connect(ConnOption.java:161)                 at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470) Of course, the firewall allows communication only towards the listener IPs, not the servers' original IPs. What should I do? Thanks.
Basically the chart is showing blue and green lines, but the user needs more distinguishable colors, like red and blue.
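If this is a Simple XML dashboard, one documented way to pin series colors is the charting.fieldColors chart option (the series names below are placeholders):

```
<option name="charting.fieldColors">{"series_a": 0xFF0000, "series_b": 0x0000FF}</option>
```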
In Splunk dashboards we want to extract fields from the _raw field; we achieved this with the extract pairdelim="{,}" kvdelim=":" command and displayed the fields using the table command. Now we see that events with more than 50k characters are skipped in the dashboard. Such events are split into 3 or more rows in the Splunk logs view. How do we handle such events in the dashboard? If the _raw field gets truncated, which field should be referred to for the original message?
Hello guys, rb_ buckets are replicated copies of db_ buckets, governed by the replication factor. But how can I identify the search factor footprint (by specific index or bucket name?) on the indexers? Thanks.
Hi, I have this log format in our environment:  2021-12-03 03:28:04.296, EVENT_TIMESTAMP="2021-12-03 03:26:38.039962 Asia/Shanghai", ACTION_NAME="LOGON", AUDIT_TYPE="Standard", RETURN_CODE="1", AUTHENTICATION_TYPE="(TYPE=(*));(CLIENT ADDRESS=((ADDRESS=(PROTOCOL=*)(HOST=1.1.1.1)(PORT=222))));", CURRENT_USER="my_own_user", DBID="0001111222", DBUSERNAME="my_own_user", INSTANCE_ID="1", OS_PROCESS="12000111", OS_USERNAME="ec2-user", SCN="900000000", SESSIONID="100000000", SYSTEM_PRIVILEGE_USED="CREATE SESSION", TERMINAL="unknown", UNIFIED_AUDIT_POLICIES="unknown", USERHOST="ec2-user", TS="2021-12-03 03:26:38" But it is missing the date_hour, date_mday, date_minute, date_month, date_second, date_wday, date_year, and date_zone fields. This is the props.conf: [audit_sample] ANNOTATE_PUNCT = false LINE_BREAKER = ([\r\n]+) SHOULD_LINEMERGE = false MAX_TIMESTAMP_LOOKAHEAD = 32 TIME_PREFIX = ^ TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N TRUNCATE = 2000 I have read https://docs.splunk.com/Documentation/Splunk/8.2.3/Knowledge/Usedefaultfields and it says that the date_* fields are only available if the timestamp is properly extracted. Which in my case seems fine, because the events have a _time field, and when I compare _time to the actual logs they match, so my props configuration appears to be working. What might be the reason I'm missing those fields? These are not Windows event logs.
I am following the article https://www.splunk.com/en_us/blog/it/splunking-aws-ecs-part-2-sending-ecs-logs-to-splunk.html to enable Splunk logging for ECS on EC2 running a demo ASP.NET (.NET 5.0) WeatherForecast web API image. No logs appear in my Splunk Cloud trial instance. When I look at the logs of the splunk/fluentd-hec:1.2.0 container I see the error: "failed to flush the buffer. retry_time=2 next_retry_seconds=2021-12-03 08:25:21 +0000 chunk="5d239a1fb8d1cc285dc139c24de689c5" error_class=SocketError error="Failed to open TCP connection to https:443 (getaddrinfo: Name or service not known)"
During login, it shows "login failed".
Hello everyone, I have the following question. In my environment I have 3 different UFs where a scripted input uses the original server name to extract some data. This script is inside one app that I deployed to the UFs, so there is only one inputs.conf at work. What I need to do is rename the host name. I know that I can do something with transforms.conf and props.conf, but I don't know how. Example mapping (original hostname -> needed hostname):
slc4E45 -> EMP
slc4P49 -> PMP
slc4L47 -> LMP
Maybe something like host = eval(case(host=slc4E45, EMP, host=slc4P49, PMP, host=slc4L47, LMP)) inside transforms.conf? Thank you for your help.
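A hedged sketch of the props/transforms approach (the stanza name and source path are assumptions, and this must live on the first parsing tier, e.g. the indexer or a HF; one transform is shown, to be repeated per hostname mapping):

```
# props.conf
[source::...your_script_path...]
TRANSFORMS-renamehost = rename_host_emp

# transforms.conf
[rename_host_emp]
SOURCE_KEY = MetaData:Host
REGEX = slc4E45
DEST_KEY = MetaData:Host
FORMAT = host::EMP
```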
Hi, we noticed that spaces in license pool names are not escaped in some Monitoring Console license searches (historic data). For a pool name like "My Pool", a license monitoring search will try:  search (index=_internal host=myserver source=*license_usage.log* type="RolloverSummary" earliest=-30d@d pool=My Pool) | eval _time=('_time' - 43200) | bin _time span=1d | stats latest(b) AS b by slave, pool, _time | timechart span=1d sum(b) AS "volume" fixedrange=false | join type=outer _time [| search (index=_internal host=myserver source=*license_usage.log* type="RolloverSummary" earliest=-30d@d pool=My Pool) | eval _time=('_time' - 43200) | bin _time span=1d | stats latest(poolsz) AS "pool size" by _time] | fields - _timediff | foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3) ] which produces no results, because the $pool_clause$ value is neither wrapped in double quotes (pool="My Pool") nor whitespace-escaped (pool=My\ Pool). Can we modify this behavior somewhere?
Below is the part of the log from which I need to extract data into tabular format in a Splunk dashboard. Payload:{\"comments\":[{\"isActive\":true,\"sendToGSR\":false,\"confidential\":false,\"profileId\":197229,\"profileCode\":null,\"commentId\":null,\"commentText\" "Value card from package was successfully issued but no Guest email was provided, please resend - N/A, Package code - PC0J0 , For amount - $476.0\",\"commentType\":null,\"commentLevelEnum\" "TC\",\"externalReferences\":[{\"referenceType\" "TC\",\"referenceValue\":1843667077}],\"auditDetails\":null}]}
Expected output:
Package Status | Please Resend | Package code | For amount | Reference Value
Value card from package was successfully issued but no Guest email was provided | N/A | PC0J0 | $476.0 | 1843667077
My Splunk query (I tried 2 columns; it displays rows but does not load the data into the table): index=*wdpr_syw* source="*stage*" "reservation-fulfillment" "comments*" "package" "POST" Logger="com.disney.service.ioc.rest.OutboundRestRequestInterceptor" "Payload*" "externalReferences*" "referenceValue*" | rex field=_raw "commentText*: (?<PackageStatus>.*?\d+)," | rex field=_raw "referenceValue*:(?<referenceValue>.*?\d+),"| table PackageStatus,referenceValue
Hello, how would I check the access roles for indexes from the web console layer? Any help will be highly appreciated. Thank you.