All Posts


Hi @jg91, if your CSV doesn't contain any timestamp, Splunk can assign either the index time or the timestamp of the previous event; it has probably assigned the latter. I suggest specifying in props.conf that the timestamp is the current time: DATETIME_CONFIG = CURRENT, as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Propsconf#Timestamp_extraction_configuration
Ciao. Giuseppe
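A minimal props.conf sketch of that setting (the sourcetype name my_csv is hypothetical; use the one from your input):

[my_csv]
DATETIME_CONFIG = CURRENT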
Hi @virgupta, Splunk is certified on any Linux platform based on kernel 4.x and greater, or 5.4, as you can see at https://docs.splunk.com/Documentation/Splunk/9.3.1/Installation/Systemrequirements or at https://www.splunk.com/en_us/download/splunk-enterprise.html
Ciao. Giuseppe
Hi, I'm trying to ingest CSV data (without a timestamp) using a Universal Forwarder (UF) running in a fresh container. When I attempt to ingest the data, I encounter the following warning in the _internal index, and the data ends up being ingested with a timestamp from 2021. This container has not previously ingested any data, so I'm unsure why it defaults to this date.

10-18-2024 03:42:00.942 +0000 WARN DateParserVerbose [1571 structuredparsing] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Jan 13 21:06:54 2021). Context: source=/var/data/sample.csv|host=splunk-uf|csv|6215

Can someone explain why this date is being applied, and how I can prevent this issue?
I have a subquery result of:

host1
host2
host3

and I want to put all of these host values as host=* in the main query.

1. Subquery:

| inputlookup test.csv
| search cluster="cluster1"
| stats count by host
| fields - count

2. Main query using the subquery: index=abc host="*", where host="*" is the subquery result.

Or is there any way to pass the subquery result as host IN (host1, host2, host3) into the main query?
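A sketch of how this usually works (names taken from the post): a plain subsearch is implicitly expanded into an OR expression over its result rows, which is equivalent to host IN (host1, host2, host3):

index=abc
    [ | inputlookup test.csv
      | search cluster="cluster1"
      | stats count by host
      | fields host ]

Because the subsearch returns only the host field, Splunk rewrites it as ( host="host1" ) OR ( host="host2" ) OR ( host="host3" ) before running the outer search.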
Great. But maybe the dashboard will not update variables/tokens until you manually change the picker. Let's say I choose "-5m" from the picker and latest defaults to "now": the token will remain fixed to the UNIX-time value of relative_time(time(), $earliest$), even when my panels refresh. So, on a dashboard with refreshing panels, the -5m effectively becomes -6m, -7m, -8m, -9m, -10m... until you change the picker. The same applies for $latest$ == "now" (time()), and the same concept for earliest: it stays fixed until you refresh the entire dashboard or the picker.
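A minimal Simple XML sketch of the difference, assuming a time picker input with token tp: passing the raw relative strings straight through lets "-5m" be re-resolved on every refresh, whereas converting them to epoch values with relative_time() in an eval pins them:

<search>
  <query>index=main | timechart count</query>
  <!-- $tp.earliest$ still holds the string "-5m", so each refresh re-evaluates it -->
  <earliest>$tp.earliest$</earliest>
  <latest>$tp.latest$</latest>
  <refresh>30s</refresh>
</search>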
Hey @dstoev, this should work if the CSV has a proper header and you have ticked the "Parse all files as CSV" checkbox on the input configuration page.
Hello @PeaceHealthDan @aavyu20, can you try adding the "trustServerCertificate=true" parameter to your JDBC URL and check how it goes? If that doesn't work, try connecting to MS SQL Server using the MS Generic Driver, or the MS Generic Driver with Windows Authentication.
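For reference, a sketch of where the parameter goes (host, port, and database name are placeholders):

jdbc:sqlserver://dbhost.example.com:1433;databaseName=mydb;trustServerCertificate=true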
I had some basic queries: can Splunk be deployed as a CNF on Red Hat OpenShift? Also, has the same deployment been tested with Cisco UCS servers? I'm starting up with Splunk, so any quick references for deployment selection and a scaling guide would be helpful.
The SHC is perfectly in sync! I have no errors at all when all nodes are running, until I restart the SHC. I will try one node at a time, and I'll monitor the logs. It's only curiosity, since the SHC works perfectly 🤷 IMO it's some kind of "artifact" left over from previous versions, across which I did upgrades (6 to 7 to 8 [where we moved the indexers to new nodes/servers]). I'm quite sure resetting the raft and rebuilding the SHC would make the "issue" go away.
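For the record, the raft reset procedure is roughly the following (a sketch; check the SHC documentation for your version before running it, and stop all members before cleaning):

# on every SHC member
splunk stop
splunk clean raft
splunk start

# then, on one member, re-bootstrap the captain
splunk bootstrap shcluster-captain -servers_list "<member1-mgmt-uri>,<member2-mgmt-uri>,..." -auth admin:<password>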
We are trying to modify vm.swappiness to 10 in /etc/sysctl.conf and are still observing the issue. What is the appropriate swap size to allocate to a single Splunk node? We observe that swap is used most frequently by mongod.
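For reference, a sketch of the change being described, and the command to apply it without a reboot:

# /etc/sysctl.conf
vm.swappiness = 10

# apply immediately
sysctl -p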
Thank you for your reply.
1. Splunk runs on Galaxy Kylin V10 SP2 (x86) on the three indexer nodes.
2. The current resource utilization situation: the swappiness parameter is vm.swappiness=30.
3. The status is that only 1.6 GB of the 64 GB of memory was used, but swap used nearly 4 GB.
I am having some issues getting this to work correctly. It does not return all the results. I have different records in different sourcetypes under the same index.

sourcetypeA:
eventID = computerName.sessionID
infoIWant1 = someinfo1
infoIWant2 = someinfo2

SourcetypeB's events are broken into events that I need to correlate:

event1:
sessionID = sessionNo1
direction = receive

event2:
sessionID = sessionNo1
direction = send

I attempted the search below, using the transaction command to correlate the records in sourcetypeB:

index=INDEX sourcetype=sourcetypeA
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| do some filter on infoIWant fields here
| join type=inner sessionID
    [ search index=INDEX sourcetype=sourcetypeB
      | transaction sessionID
      | where eventcount==2
      | fields sessionID duration ]
| chart count by duration
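Two known limits may explain the missing results: join truncates its subsearch at around 50,000 rows, and transaction is memory-bounded and can silently discard sessions. A join-free sketch using stats, assuming sessionID is the only correlation key (all other names come from the post; your infoIWant filtering would need to move after the stats, onto the values() fields):

index=INDEX (sourcetype=sourcetypeA OR sourcetype=sourcetypeB)
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| stats values(infoIWant1) as infoIWant1 values(infoIWant2) as infoIWant2
        count(eval(sourcetype=="sourcetypeB")) as b_count
        range(eval(if(sourcetype=="sourcetypeB", _time, null()))) as duration
        by sessionID
| where b_count==2
| chart count by duration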
Hello all,

I recently discovered Linux capabilities, in particular the CAP_DAC_READ_SEARCH option of the AmbientCapabilities parameter in systemd services, and realised it was actually implemented on Splunk UF 9.0+. I was happy to see this included in the service of UFs, but I then found it was not enabled by default on Splunk Enterprise (I was using 9.3.1), so I attempted to create an override for the service, including the aforementioned parameter. Unfortunately, I was still unable to ingest logs for which the user running splunk did not have permissions.

Funnily enough, I tried to set up monitoring of /var/log/messages through the GUI; I was able to see the logs when selecting the sourcetype, but then I got an error "Parameter name: Path is not readable" when submitting the conf. I also get an insufficient-permission message in the internal logs when forcing the monitoring of /var/log/messages via an inputs.conf. I read in an older post that this behaviour comes from the use of an inappropriate function when checking the permissions on the file...

So my questions to the community and Splunk employees are:

1. Are capabilities in services supported for Splunk Enterprise? If so, how can I set them up? If not, will they be supported at some point?
2. How would you collect logs on a HF or standalone instance where the user running splunk has no rights on the logs to ingest?

Thanks
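For clarity, the drop-in override I attempted looks like this (a sketch, assuming the Enterprise unit is named Splunkd.service on your system; created via systemctl edit Splunkd.service, then a daemon-reload and restart):

[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH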
Hi PickleRick. Thank you for your advice. Certainly, access to my server is blocked:

PS C:\> Test-NetConnection 127.0.0.1 -port 8000

ComputerName     : 127.0.0.1
RemoteAddress    : 127.0.0.1
RemotePort       : 8000
InterfaceAlias   : Loopback Pseudo-Interface 1
SourceAddress    : 127.0.0.1
TcpTestSucceeded : True

PS C:\> Test-NetConnection 192.168.0.8 -port 8000
WARNING: TCP connect to (192.168.0.8 : 8000) failed

ComputerName           : 192.168.0.8
RemoteAddress          : 192.168.0.8
RemotePort             : 8000
InterfaceAlias         : Ethernet0
SourceAddress          : 192.168.0.8
PingSucceeded          : True
PingReplyDetails (RTT) : 0 ms
TcpTestSucceeded       : False

Then I tried to disable the Windows Firewall, but I still cannot access the host IP address:

PS C:\> Get-NetFirewallProfile

Name                            : Domain
Enabled                         : False
DefaultInboundAction            : NotConfigured
DefaultOutboundAction           : NotConfigured
AllowInboundRules               : NotConfigured
AllowLocalFirewallRules         : NotConfigured
AllowLocalIPsecRules            : NotConfigured
AllowUserApps                   : NotConfigured
AllowUserPorts                  : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen                  : True
EnableStealthModeForIPsec       : NotConfigured
LogFileName                     : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes             : 4096
LogAllowed                      : False
LogBlocked                      : False
LogIgnored                      : NotConfigured
DisabledInterfaceAliases        : {NotConfigured}

Name                            : Private
(identical to Domain, except NotifyOnListen : False)

Name                            : Public
(identical to Domain, except NotifyOnListen : False)

Can you think of other reasons?
Pete, I guess I don't quite understand what you're trying to do. Are you trying to develop a Splunk app? If so, you should have your own Splunk instance with the relevant components to replicate the data. Is this data going back into the same Splunk instance, or are you trying to get the Splunk data into some external, non-Splunk DB? I guess I'm also asking how this is architected and what data is being moved from where to where (including ports, data types, etc.)?

Also, per Splunkbase: What can DB Connect do?

Database import - Splunk DB Connect allows you to import tables, rows, and columns from a database directly into Splunk Enterprise, which indexes the data. You can then analyze and visualize that relational data from within Splunk Enterprise just as you would the rest of your Splunk Enterprise data.

Database export - DB Connect also enables you to output data from Splunk Enterprise back to your relational database. You map the Splunk Enterprise fields to the database tables you want to write to.

Database lookups - DB Connect also performs database lookups, which let you reference fields in an external database that match fields in your event data. Using these matches, you can add more meaningful information and searchable fields to enrich your event data.

Database access - DB Connect also allows you to directly use SQL in your Splunk searches and dashboards. Using these commands, you can make useful mashups of structured data with machine data.
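On the "database access" point, a quick sketch of what that looks like in a search (the connection name and SQL are placeholders):

| dbxquery connection="my_mysql_conn" query="SELECT id, created_at FROM audit_log LIMIT 10"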
Thank you so much Victor!!
Hi John,

We are not using Splunk directly, but we are developing a tool for a customer who does. The customer has data sources connected to Splunk, and they want to forward the audit logs from these data sources using the Universal Forwarder. We're trying to figure out how to forward these audit logs to Logstash, either via TCP or Filebeat. Do you have any suggestions on how to achieve this?

I've managed to receive syslog data on my local laptop using the Universal Forwarder with the following outputs.conf:

[tcpout]

[tcpout:fastlane]
server = 127.0.0.1:514
sendCookedData = false

I've also connected MySQL to Splunk using DB Connect to test it out, but I'm not receiving any MySQL logs.
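In case it helps, the forwarder side of picking up an audit log file is a monitor stanza in inputs.conf like this (a sketch; the path and sourcetype are placeholders for your actual log):

[monitor:///var/log/myapp/audit.log]
sourcetype = myapp:audit
disabled = 0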
If you look at the screenshot... my goal is to get the values between 10 and 20, but to ignore/delete the outlier of -9.4 (a faulty value from the sensor), which you can see on the absolute min/max graph... in the timechart you see it as a little edge... The sensor emits a value every minute, 24/7. On the "timechart" curve you won't notice the small number of outliers among 1440 values, but in the "stats" min/max per day you will see extreme values like the -9.4, which is completely illogical given a minimum average of ~10. In order to know which of the min/max values of the day is wrong (an outlier), I came up with the idea of verifying the values using the timechart min/max; that was my idea... I hope it is understandable.
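A sketch of one way to drop such outliers before the daily stats, assuming the measurement field is named value (hypothetical) and that sensor faults sit far from the daily median:

index=sensor_data
| bin _time span=1d
| eventstats median(value) as daily_median by _time
| where abs(value - daily_median) < 5
| stats min(value) as daily_min max(value) as daily_max by _time

The threshold of 5 is arbitrary here; tune it to the spread you expect between 10 and 20.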
Hey @tread_splunk,

Not gonna lie, it's a bit hard to understand your goal here. Both the search and GET_PASSWORD actions reside only in the _audit index, while the _internal index holds other kinds of information. If what you want is just to use the internal logs to get the source clientip for that user (not exactly related to the action calls, though), you can try something like this:

index=_audit (action=search OR action=GET_PASSWORD)
| stats count as audit_count by user
| join user
    [ search index=_internal sourcetype=splunkd_access user=* clientip=*
      | stats count as internal_count by user clientip ]
| table user clientip audit_count internal_count

The counts on audit and internal are the part that doesn't make much sense to me, unless you want to filter the URI in the internal logs down to something that is triggered during action=search or action=GET_PASSWORD, in which case you can customize my query a bit more. If I'm off track, please help me understand your goal so I can give you more insights.
Will do. Thanks for your speedy response!