All Posts


I have some basic questions: can Splunk be deployed as a CNF on Red Hat OpenShift Cloud Platform? Also, has the same deployment been tested with Cisco UCS servers? I'm just starting out with Splunk, so any quick references for deployment selection and a scaling guide would be helpful.
The SHC is perfectly in sync! I have no errors at all when all nodes are running, until I restart the SHC. I will try one node at a time and monitor the logs. It's only out of curiosity, since the SHC works perfectly 🤷 IMO it's some kind of "artifact" left over from previous versions, across which I did upgrades (6 to 7 to 8, where we moved the indexers to new nodes/servers). I'm quite sure resetting raft and rebuilding the SHC will clear the "issue".
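For readers landing here, a rough sketch of the raft reset the poster mentions, assuming a three-member SHC with hypothetical host names; verify the exact procedure against the search head clustering docs for your version before running it:

# on each search head cluster member
splunk stop
splunk clean raft
splunk start
# then, on the member you want to become captain
splunk bootstrap shcluster-captain -servers_list "https://sh1:8089,https://sh2:8089,https://sh3:8089" -auth admin:changeme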
We tried setting vm.swappiness to 10 in /etc/sysctl.conf and are still observing the behaviour. What is the appropriate amount of swap to allocate for a single Splunk node? We observe that swap is used most frequently by mongod.
Thank you for your reply.
1. The three indexer nodes run Splunk on Galaxy Kirin V10 SP2 (x86).
2. Current setting: the swappiness parameter is vm.swappiness = 30.
3. The current status is that only 1.6 GB of the 64 GB of memory is used, but swap usage is nearly 4 GB.
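As a reference for the setting both posts discuss, a minimal sketch of changing swappiness persistently on a Linux node; the value 10 is just the example from the thread, not a sizing recommendation:

# /etc/sysctl.conf
vm.swappiness = 10
# apply without a reboot
sysctl -p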
I am having some issues getting this to work correctly; it does not return all the results. I have different records in different sourcetypes under the same index.

sourcetypeA
eventID = computerName.sessionID
infoIWant1 = someinfo1
infoIWant2 = someinfo2

SourcetypeB's events are broken into separate events that I need to correlate.

sourcetypeB
event1:
sessionID = sessionNo1
direction = receive

event2:
sessionID = sessionNo1
direction = send

I attempted the search below, using the transaction command to correlate the records in sourcetypeB.

index=INDEX sourcetype=sourcetypeA
| rex field=eventID "\w{0,30}+.(?<sessionID>\d+)"
| do some filter on infoIWant fields here
| join type=inner sessionID
    [ search index=INDEX sourcetype=sourcetypeB
      | transaction sessionID
      | where eventcount==2
      | fields sessionID duration ]
| chart count by duration
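As a hedged aside, one common way to avoid transaction plus join here is a single stats split by sessionID; the index, sourcetype, and field names below are just the placeholders from the post, and the rex is only illustrative:

index=INDEX (sourcetype=sourcetypeA OR sourcetype=sourcetypeB)
| rex field=eventID "\w{0,30}\.(?<extractedID>\d+)"
| eval sessionID=coalesce(sessionID, extractedID)
| stats count(eval(sourcetype="sourcetypeB")) as b_count
        min(eval(if(sourcetype="sourcetypeB", _time, null()))) as b_start
        max(eval(if(sourcetype="sourcetypeB", _time, null()))) as b_end
        values(infoIWant1) as infoIWant1 values(infoIWant2) as infoIWant2
        by sessionID
| where b_count==2
| eval duration=b_end-b_start
| chart count by duration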
Hello all, I recently discovered Linux capabilities, in particular the CAP_DAC_READ_SEARCH option of the AmbientCapabilities parameter in services, and realised it was actually implemented in Splunk UF 9.0+. I was happy to see this included in the UF service, but I then found it was not enabled by default on Splunk Enterprise (I was using 9.3.1), so I attempted to create an override for the service, including the aforementioned parameter. Unfortunately, I was still unable to ingest logs for which the user running Splunk did not have permissions.

Funnily enough, I tried to set up monitoring on /var/log/messages through the GUI; I was able to see the logs when selecting the sourcetype, but then I got an error "Parameter name: Path is not readable" when submitting the configuration. I also get an insufficient-permission message in the internal logs when forcing the monitoring of /var/log/messages via an inputs.conf. I read in an older post that this behaviour comes from the use of an inappropriate function when checking the permissions on the file.

So my questions to the community and Splunk employees are:
Are capabilities in services supported for Splunk Enterprise? If so, how can I set them up? If not, will they be supported at some point?
How would you collect logs on a HF or standalone instance where the user running Splunk has no rights on the logs to ingest?

Thanks
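For context, the kind of override the poster describes would look roughly like the drop-in below; the unit name Splunkd.service is an assumption (it depends on how boot-start was enabled), and whether splunkd actually honours the capability for file monitoring is exactly the open question in the post:

# systemctl edit Splunkd   (creates /etc/systemd/system/Splunkd.service.d/override.conf)
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH
# then reload and restart
systemctl daemon-reload
systemctl restart Splunkd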
Hi PickleRick. Thank you for your advice. Access to my server is indeed blocked.

PS C:\> Test-NetConnection 127.0.0.1 -port 8000
ComputerName : 127.0.0.1
RemoteAddress : 127.0.0.1
RemotePort : 8000
InterfaceAlias : Loopback Pseudo-Interface 1
SourceAddress : 127.0.0.1
TcpTestSucceeded : True

PS C:\> Test-NetConnection 192.168.0.8 -port 8000
WARNING: TCP connect to (192.168.0.8 : 8000) failed
ComputerName : 192.168.0.8
RemoteAddress : 192.168.0.8
RemotePort : 8000
InterfaceAlias : Ethernet0
SourceAddress : 192.168.0.8
PingSucceeded : True
PingReplyDetails (RTT) : 0 ms
TcpTestSucceeded : False

Then I tried disabling the Windows Firewall, but I still cannot reach the host IP address.

PS C:\> Get-NetFirewallProfile
Name : Domain
Enabled : False
DefaultInboundAction : NotConfigured
DefaultOutboundAction : NotConfigured
AllowInboundRules : NotConfigured
AllowLocalFirewallRules : NotConfigured
AllowLocalIPsecRules : NotConfigured
AllowUserApps : NotConfigured
AllowUserPorts : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen : True
EnableStealthModeForIPsec : NotConfigured
LogFileName : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes : 4096
LogAllowed : False
LogBlocked : False
LogIgnored : NotConfigured
DisabledInterfaceAliases : {NotConfigured}

Name : Private
Enabled : False
DefaultInboundAction : NotConfigured
DefaultOutboundAction : NotConfigured
AllowInboundRules : NotConfigured
AllowLocalFirewallRules : NotConfigured
AllowLocalIPsecRules : NotConfigured
AllowUserApps : NotConfigured
AllowUserPorts : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen : False
EnableStealthModeForIPsec : NotConfigured
LogFileName : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes : 4096
LogAllowed : False
LogBlocked : False
LogIgnored : NotConfigured
DisabledInterfaceAliases : {NotConfigured}

Name : Public
Enabled : False
DefaultInboundAction : NotConfigured
DefaultOutboundAction : NotConfigured
AllowInboundRules : NotConfigured
AllowLocalFirewallRules : NotConfigured
AllowLocalIPsecRules : NotConfigured
AllowUserApps : NotConfigured
AllowUserPorts : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen : False
EnableStealthModeForIPsec : NotConfigured
LogFileName : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes : 4096
LogAllowed : False
LogBlocked : False
LogIgnored : NotConfigured
DisabledInterfaceAliases : {NotConfigured}

Can you think of any other reasons?
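Since all firewall profiles are disabled, one thing worth checking is whether anything is actually listening on 0.0.0.0:8000 rather than only on 127.0.0.1; a quick check (assuming Splunk Web is the service on port 8000) could be:

PS C:\> Get-NetTCPConnection -State Listen -LocalPort 8000 | Select-Object LocalAddress, LocalPort, OwningProcess

If the only listener shows LocalAddress 127.0.0.1, Splunk Web is bound to loopback (I believe this is controlled by server.socket_host in web.conf), and no firewall change will make the host IP reachable.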
Pete, I guess I don't quite understand what you're trying to do. Are you trying to develop a Splunk app? If so, you should have your own Splunk instance with the relevant components to replicate the data. Is this data going back into the same Splunk instance, or are you trying to get the Splunk data into some external, non-Splunk database? I'm also asking how this is architected and what data is being moved from where to where (including ports, data types, etc.).

Also, per Splunkbase: What can DB Connect do?

Database import - Splunk DB Connect allows you to import tables, rows, and columns from a database directly into Splunk Enterprise, which indexes the data. You can then analyze and visualize that relational data from within Splunk Enterprise just as you would the rest of your Splunk Enterprise data.
Database export - DB Connect also enables you to output data from Splunk Enterprise back to your relational database. You map the Splunk Enterprise fields to the database tables you want to write to.
Database lookups - DB Connect also performs database lookups, which let you reference fields in an external database that match fields in your event data. Using these matches, you can add more meaningful information and searchable fields to enrich your event data.
Database access - DB Connect also allows you to directly use SQL in your Splunk searches and dashboards. Using these commands, you can make useful mashups of structured data with machine data.
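To make the "database access" point concrete, a minimal hedged example of running SQL from a search with DB Connect; the connection name and table are hypothetical:

| dbxquery connection="my_mysql_connection" query="SELECT order_id, status FROM orders WHERE status = 'FAILED'"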
Thank you so much Victor!!
Hi John, We are not using Splunk directly, but we are developing a tool for a customer who does. The customer has data sources connected to Splunk, and they want to forward the audit logs from these data sources using the Universal Forwarder. We're trying to figure out how to forward these audit logs to Logstash, either via TCP or Filebeat. Do you have any suggestions on how to achieve this? I've managed to receive syslog data on my local laptop using the Universal Forwarder with the following outputs.conf:

[tcpout]

[tcpout:fastlane]
server = 127.0.0.1:514
sendCookedData = false

I've also connected MySQL to Splunk using DB Connect to test it out, but I'm not receiving any MySQL logs.
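One thing that may be worth double-checking in that outputs.conf is whether the fastlane group is actually selected as the default output; a sketch of the same config with an explicit defaultGroup (same hypothetical destination) would be:

[tcpout]
defaultGroup = fastlane

[tcpout:fastlane]
server = 127.0.0.1:514
sendCookedData = false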
If you look at the screenshot, my goal is to get the values between 10 and 20 but ignore/delete the outlier of -9.4 (a faulty value from the sensor), which you can see on the absolute min/max graph; in the timechart it shows up as a little edge. The sensor sends a value every minute, 24/7. In the "timechart" curve you won't see the small number of outliers among 1440 values, but in the "stats" min/max per day you will see extreme values like the -9.4, which is absolutely illogical given a minimum average of ~10. In order to know which of the day's min/max values is a wrong value or outlier, I came up with the idea of verifying the values using the timechart min/max; that was my idea. I hope it is understandable.
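A hedged sketch of the kind of percentile-based filtering that can drop a single faulty reading before computing the daily min/max; the index, sourcetype, and field names are made up for illustration:

index=sensor_index sourcetype=sensor_data
| eventstats perc5(value) as low perc95(value) as high
| where value>=low AND value<=high
| bin _time span=1d
| stats min(value) as daily_min max(value) as daily_max by _time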
Hey @tread_splunk, Not gonna lie, it's a bit hard to understand your goal here. Both the search and GET_PASSWORD actions reside only in the _audit index, while the _internal index holds other kinds of information. If what you want is just to use the internal logs to get the source clientip for that user (not exactly related to the action calls, though), you can try something like this:

index=_audit (action=search OR action=GET_PASSWORD)
| stats count as audit_count by user
| join user
    [ search index=_internal sourcetype=splunkd_access user=* clientip=*
      | stats count as internal_count by user clientip ]
| table user clientip audit_count internal_count

The counts on audit and internal are the part that doesn't make much sense to me, unless you want to filter the URI in the internal logs to something that is triggered during action=search or action=GET_PASSWORD, in which case you can customize my query a bit more. If I'm off base, please help me understand your goal so I can try to give you more insight.
Will do. Thanks for your speedy response! 
Hi Ravi, The strategy is to configure your multivalue input using a prefix, suffix, and delimiter, like this:

<fieldset submitButton="false">
  <input type="multiselect" token="field1">
    <label>field1</label>
    <choice value="value1">value1</choice>
    <choice value="value2">value2</choice>
    <choice value="value3">value3</choice>
    <delimiter>,</delimiter>
    <prefix>| stats sum(</prefix>
    <suffix>)</suffix>
  </input>
</fieldset>

So in your search you'll simply do:

|makeresults $field1$

where |makeresults is your actual search. The token will append | stats sum(<selected_values>). Then you can add the proper grouping fields, etc., to make it produce what you want (for example, adding "as Total by Jobname" after the ")" suffix).
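If each selected field should get its own sum(), a variant of the same idea using valuePrefix and valueSuffix may read more cleanly; this is only a sketch reusing the same example values:

<input type="multiselect" token="field1">
  <label>field1</label>
  <choice value="value1">value1</choice>
  <choice value="value2">value2</choice>
  <valuePrefix>sum(</valuePrefix>
  <valueSuffix>)</valueSuffix>
  <delimiter>, </delimiter>
  <prefix>| stats </prefix>
  <suffix> by Jobname</suffix>
</input>

Selecting value1 and value2 would then expand $field1$ to: | stats sum(value1), sum(value2) by Jobname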
Perhaps an irrelevant question, but did you engage your local infrastructure or security team, or whoever is managing Splunk? If your org does not have a proper process for making these sorts of adjustments because you only have access to a UF, there may be larger challenges to tackle. Every Splunk deployment is different, so providing more specific advice is harder when you cannot spell out everything you can and cannot do, since at least some of the possible solutions assume you have proper administrative access to all of your relevant Splunk components, of which a HF is very much an important piece. That aside, you should be working with your Splunk admins to get the right configurations in place.
I have a multiselect drop-down menu with field names as values. When I select one or more values from the drop-down menu, those fields/columns need to be totaled. I tried the code below, as suggested by Meta AI, but it is not producing any result. Please help me.

<dashboard>
  <label>Sum Selected Fields</label>
  <row>
    <panel>
      <input type="dropdown" token="selected_fields">
        <label>Select Fields</label>
        <choice value="field1">Field 1</choice>
        <choice value="field2">Field 2</choice>
        <choice value="field3">Field 3</choice>
      </input>
      <chart>
        <search>
          | eval sum_fields="$selected_fields$"
          | stats sum(eval(split(sum_fields, ","))) as Total by Jobname
        </search>
      </chart>
    </panel>
  </row>
</dashboard>
I have the following fabricated search, which is a pretty close representation of what I actually want to do and gives me the results I want:

(index=_audit (action=search OR action=GET_PASSWORD)) OR
(index=_internal
    [ search index=_audit (action=search OR action=GET_PASSWORD)
      | dedup user
      | table user ])
| stats count(eval(index="_audit")) as count, values(clientip) as clientip, count(eval(index="_internal")) as internalCount by user

i.e. for everyone who has performed a search or GET_PASSWORD in one index, I want to know something about them gathered from both indexes. I can't get past the feeling that I shouldn't need to repeat the "index=_audit (action=search OR action=GET_PASSWORD)" search, which in the actual search is a whole lot of SPL, so duplicating it makes things untidy. Macros aside, can anyone come up with a more elegant solution?
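One hedged alternative that avoids repeating the filter is to pull both indexes and let eventstats decide which users qualify; note it scans _internal without the user subsearch, so it trades the duplicated SPL for a potentially heavier search:

index=_audit OR index=_internal
| eval matched=if(index="_audit" AND (action="search" OR action="GET_PASSWORD"), 1, 0)
| eventstats sum(matched) as matched_count by user
| where matched_count>0
| stats sum(matched) as count, values(clientip) as clientip, count(eval(index="_internal")) as internalCount by user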
Hi @BKDRockz , I understand that this way you don't consume license, but using dbxquery in searches isn't the best approach to extract data from a database, because DB Connect is a very slow extraction tool. The better approach is to extract the data separately using both queries, saving the results to an index, and then use the indexed data for your search. In addition, don't use join, because it's a very slow command: you can find many examples of correlation searches in the Community. I'd suggest redesigning your ingestion and search process. Ciao. Giuseppe
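As a rough illustration of "extract separately into an index" under that advice, a scheduled search could land the query results with collect; the connection name and index are hypothetical, and DB Connect's own scheduled database inputs are usually the more standard way to do this:

| dbxquery connection="my_db_connection" query="SELECT id, status, updated_at FROM orders"
| collect index=db_orders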
Hi @JoshuaJJ , check whether you have network issues in the communication between the Indexers and the License Master. Otherwise, open a case with Splunk Support. Remember to generate and send them a diag from the machine that's sending the message and from an indexer. Ciao. Giuseppe
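A quick hedged way to check both points: from an indexer, confirm the license manager's management port is reachable (hypothetical host name), and generate the diag to attach to the case:

curl -k https://license-manager.example.com:8089/services/server/info
$SPLUNK_HOME/bin/splunk diag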
The way I handle this is to have a script run as a cron job from a maintenance server that queries the process scheduler tables in the database and writes out what I want to see about process scheduler jobs as key=value pairs into a text file, and then have a Splunk UF monitor that text file. Ugly, but it works.
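For completeness, the monitoring side of that approach is just a standard file monitor on the UF; the path, index, and sourcetype below are hypothetical:

# inputs.conf on the UF
[monitor:///var/psoft/prcs_jobs.log]
index = peoplesoft
sourcetype = psoft:prcs_scheduler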