All Posts



Hi @whipstash, don't use the join command; it's a very slow command. Use a different approach:

index=INDEX sourcetype=sourcetypeA
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| do some filter on infoIWant fields here
| append
    [ search index=INDEX sourcetype=sourcetypeB
    | stats count AS eventcount earliest(_time) AS earliest latest(_time) AS latest BY sessionID
    | eval duration=latest-earliest
    | where eventcount=2
    | fields sessionID duration ]
| stats values(eventID) AS eventID values(duration) AS duration values(count) AS count BY sessionID

Please adapt this approach to your real situation. Ciao. Giuseppe
Wow it works. @gcusello you are super duper. Thanks!
Hi @myusufe71, let me understand: you want to filter the results of the main search with the results of the subsearch, is that correct? In this case, please try this:

index=abc
    [ | inputlookup test.csv WHERE cluster="cluster1"
      | dedup host
      | fields host ]

Just pay attention that the field used as the key (host) is named the same in both the main and sub search (it's case sensitive!). Ciao. Giuseppe
Hi @jg91, if your csv doesn't contain any timestamp, Splunk assigns either the index time or the timestamp of the previous event; in your case it probably assigned the latter. I suggest specifying in props.conf that the timestamp should be the current time:

DATETIME_CONFIG = CURRENT

as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Propsconf#Timestamp_extraction_configuration Ciao. Giuseppe
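For reference, a minimal props.conf stanza along these lines might look like the following sketch; the sourcetype name my_csv is a placeholder, and the INDEXED_EXTRACTIONS line is an assumption for headered CSV data, not something stated in the post:

```ini
# Hypothetical sourcetype name; adjust to your own
[my_csv]
# Skip timestamp extraction and stamp events with the current time
DATETIME_CONFIG = CURRENT
# Typical setting for CSV files with a header row (assumption)
INDEXED_EXTRACTIONS = csv
```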
Hi @virgupta, Splunk is certified on Linux platforms based on kernel 4.x and greater, or 5.4, as you can see at https://docs.splunk.com/Documentation/Splunk/9.3.1/Installation/Systemrequirements or at https://www.splunk.com/en_us/download/splunk-enterprise.html Ciao. Giuseppe
Hi, I'm trying to ingest CSV data (without a timestamp) using a Universal Forwarder (UF) running in a fresh container. When I attempt to ingest the data, I encounter the following warning in the _internal index, and the data ends up being ingested with a timestamp from 2021. This container has not previously ingested any data, so I'm unsure why it defaults to this date.

10-18-2024 03:42:00.942 +0000 WARN DateParserVerbose [1571 structuredparsing] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Wed Jan 13 21:06:54 2021). Context: source=/var/data/sample.csv|host=splunk-uf|csv|6215

Can someone explain why this date is being applied, and how I can prevent this issue?
I have a subquery result of host1, host2, host3, and I want to put all of these host results as host=* in the main query.

1. Subquery:

| inputlookup test.csv
| search cluster="cluster1"
| stats count by host
| fields - count

2. Main query using the subquery, as index=abc host="*" host="*", where host="*" is the subquery result.

Or is there any way to express the subquery result as host IN (host1, host2, host3) in the main query?
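One hedged sketch for getting the subsearch expanded into an explicit OR of host=... terms uses the format command, which renders subsearch rows as a search expression (field and lookup names taken from the post above):

```spl
index=abc
    [ | inputlookup test.csv
      | search cluster="cluster1"
      | dedup host
      | fields host
      | format ]
```

The subsearch then expands to something like ( ( host="host1" ) OR ( host="host2" ) OR ( host="host3" ) ), which filters the main search equivalently to host IN (host1, host2, host3).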
Great, but maybe the dashboard will not update variables/tokens until you manually change the picker. Let's say I choose "-5m" from the picker and latest defaults to "now": the token will remain fixed to relative_time(time(), $earliest$), a UNIX-time value, even if my panels refresh. So, with a dashboard whose panels refresh, the -5m effectively becomes -6, -7, -8, -9, -10... until you change the picker. The same applies to $latest$=="now", i.e. time(): it stays fixed until you refresh the entire dashboard/picker.
Hey @dstoev, this works if the CSV has a proper header and you have marked the checkbox for Parse all files as CSV on the input configuration page.
Hello @PeaceHealthDan @aavyu20, can you try adding the "trustServerCertificate=true" parameter to your JDBC URL and check how it goes? If that doesn't work, try connecting to MS SQL Server using the MS Generic Driver or the MS Generic Driver with Windows Authentication.
I had some basic queries: can Splunk be deployed as a CNF on the Red Hat OpenShift Cloud Platform? Also, has the same deployment been tested with Cisco UCS servers? I'm starting up with Splunk, so any quick references for the deployment selection and scaling guide would be helpful.
The SHC is perfectly in sync!!! I have no errors at all when all nodes are running, until I restart the SHC. I will try one node at a time, and I'll monitor the logs. It's only out of curiosity, since the SHC works perfectly 🤷 IMO it's some kind of "artifact" left over from previous versions, over which I did upgrades (6 to 7 to 8 [where we changed the Indexers to new nodes/servers]). I'm quite sure resetting the raft and rebuilding the SHC will remove the issue.
We are trying to modify vm.swappiness to 10 in /etc/sysctl.conf and are still observing. What is the appropriate swap size to allocate to a single node in Splunk? We observe that swap is most frequently used by mongod.
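As a reference sketch, the change described above is a single line in /etc/sysctl.conf, applied without a reboot via sysctl -p:

```ini
# Prefer reclaiming page cache over swapping out process memory
vm.swappiness = 10
```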
Thank you for your reply.
1. Splunk runs on Galaxy Kirin V10 SP2 (x86) for the three indexer nodes.
2. The current resource utilization: swap parameter vm.swappiness=30.
3. The status is that only 1.6GB of the 64GB memory was used, but swap used nearly 4GB.
I am having some issues getting this to work correctly; it does not return all the results. I have different records in different sourcetypes under the same index.

sourcetypeA:
eventID = computerName.sessionID
infoIWant1 = someinfo1
infoIWant2 = someinfo2

sourcetypeB's events are broken into pairs that I need to correlate:

event1:
sessionID = sessionNo1
direction = receive

event2:
sessionID = sessionNo1
direction = send

I attempted the below search using the transaction command to correlate the records in sourcetypeB:

index=INDEX sourcetype=sourcetypeA
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| do some filter on infoIWant fields here
| join type=inner sessionID
    [ search index=INDEX sourcetype=sourcetypeB
    | transaction sessionID
    | where eventcount==2
    | fields sessionID duration ]
| chart count by duration
Hello all, I recently discovered Linux capabilities, in particular the option CAP_DAC_READ_SEARCH from the AmbientCapabilities parameter in services, and realised it was actually implemented on Splunk UF 9.0+. I was happy to see this included in the service of UFs, but I then found it was not enabled by default on Splunk Enterprise (I was using 9.3.1), so I attempted to create an override for the service, including the aforementioned parameter. Unfortunately, I was unable to ingest logs for which the user running splunk did not have the permissions. Funnily enough, I tried to set some monitoring on /var/log/messages through the GUI; I was able to see the logs when selecting the sourcetype, but then I got an error "Parameter name: Path is not readable" when submitting the conf. I also get an insufficient permission message in the internal logs when forcing the monitoring of /var/log/messages via an inputs.conf. I read on an older post that this behaviour comes from the use of an inappropriate function when checking the permissions on the file...

So my questions to the community and Splunk employees are:
- Are capabilities in services supported for Splunk Enterprise?
- If so, how can I set them up? If not, will they be supported at some point?
- How would you collect logs on a HF or standalone instance, where the user running splunk has no rights on the logs to ingest?

Thanks
Hi PickleRick. Thank you for your advice. Certainly, access to my server is blocked.

PS C:\> Test-NetConnection 127.0.0.1 -port 8000

ComputerName : 127.0.0.1
RemoteAddress : 127.0.0.1
RemotePort : 8000
InterfaceAlias : Loopback Pseudo-Interface 1
SourceAddress : 127.0.0.1
TcpTestSucceeded : True

PS C:\> Test-NetConnection 192.168.0.8 -port 8000
WARNING: TCP connect to (192.168.0.8 : 8000) failed

ComputerName : 192.168.0.8
RemoteAddress : 192.168.0.8
RemotePort : 8000
InterfaceAlias : Ethernet0
SourceAddress : 192.168.0.8
PingSucceeded : True
PingReplyDetails (RTT) : 0 ms
TcpTestSucceeded : False

Then I tried to disable Windows Firewall, but I still cannot access the host IP address.

PS C:\> Get-NetFirewallProfile

Name : Domain
Enabled : False
DefaultInboundAction : NotConfigured
DefaultOutboundAction : NotConfigured
AllowInboundRules : NotConfigured
AllowLocalFirewallRules : NotConfigured
AllowLocalIPsecRules : NotConfigured
AllowUserApps : NotConfigured
AllowUserPorts : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen : True
EnableStealthModeForIPsec : NotConfigured
LogFileName : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes : 4096
LogAllowed : False
LogBlocked : False
LogIgnored : NotConfigured
DisabledInterfaceAliases : {NotConfigured}

Name : Private
Enabled : False
DefaultInboundAction : NotConfigured
DefaultOutboundAction : NotConfigured
AllowInboundRules : NotConfigured
AllowLocalFirewallRules : NotConfigured
AllowLocalIPsecRules : NotConfigured
AllowUserApps : NotConfigured
AllowUserPorts : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen : False
EnableStealthModeForIPsec : NotConfigured
LogFileName : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes : 4096
LogAllowed : False
LogBlocked : False
LogIgnored : NotConfigured
DisabledInterfaceAliases : {NotConfigured}

Name : Public
Enabled : False
DefaultInboundAction : NotConfigured
DefaultOutboundAction : NotConfigured
AllowInboundRules : NotConfigured
AllowLocalFirewallRules : NotConfigured
AllowLocalIPsecRules : NotConfigured
AllowUserApps : NotConfigured
AllowUserPorts : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen : False
EnableStealthModeForIPsec : NotConfigured
LogFileName : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes : 4096
LogAllowed : False
LogBlocked : False
LogIgnored : NotConfigured
DisabledInterfaceAliases : {NotConfigured}

Do you think there are other reasons?
Pete, I guess I don't quite understand what you're trying to do. Are you trying to develop a Splunk app? If so, you should have your own Splunk instance with relevant components to replicate the data. Is this data going back into the same Splunk instance, or are you trying to get the Splunk data into some external, non-Splunk DB? I'm also asking how this is architected and what data is being moved from where to where (including ports and data types, etc.).

Also, per Splunkbase: What can DB Connect do?

Database import - Splunk DB Connect allows you to import tables, rows, and columns from a database directly into Splunk Enterprise, which indexes the data. You can then analyze and visualize that relational data from within Splunk Enterprise just as you would the rest of your Splunk Enterprise data.
Database export - DB Connect also enables you to output data from Splunk Enterprise back to your relational database. You map the Splunk Enterprise fields to the database tables you want to write to.
Database lookups - DB Connect also performs database lookups, which let you reference fields in an external database that match fields in your event data. Using these matches, you can add more meaningful information and searchable fields to enrich your event data.
Database access - DB Connect also allows you to directly use SQL in your Splunk searches and dashboards. Using these commands, you can make useful mashups of structured data with machine data.
Thank you so much Victor!!
Hi John, we are not using Splunk directly, but we are developing a tool for a customer who does. The customer has data sources connected to Splunk, and they want to forward the audit logs from these data sources using the Universal Forwarder. We're trying to figure out how to forward these audit logs to Logstash, either via TCP or Filebeat. Do you have any suggestions on how to achieve this? I've managed to receive syslog data on my local laptop using the Universal Forwarder with the following outputs.conf:

[tcpout]

[tcpout:fastlane]
server = 127.0.0.1:514
sendCookedData = false

I've also connected MySQL to Splunk using DB Connect to test it out, but I'm not receiving any MySQL logs.
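Not from the post, but as a hedged sketch of the receiving side: since sendCookedData = false makes the forwarder send plain text, a minimal Logstash pipeline listening on the same port might look like the following (the port number matches the outputs.conf above; the line codec and stdout output are assumptions for testing):

```conf
input {
  tcp {
    # Same port as the "server" setting in outputs.conf
    port => 514
    # Treat each received line as one event (plain, uncooked text)
    codec => line
  }
}
output {
  # Print parsed events to the console for verification
  stdout { codec => rubydebug }
}
```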