All Topics

I'm using rex to extract a field called field1 from my search. How do I take all the results of field1 and flag whether or not they match on case? For example:

_time  abc_123
_time  ABC_123

versus:

_time  def_123
_time  def_123

In the first example I'd want to say there's a case difference, while the second example is fine since the cases match.
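One possible approach (the rex pattern here is a placeholder for whatever extraction is actually in use): normalize each value with lower() and count how many distinct casings share the same normalized form.

```
... | rex "(?<field1>\w+_\d+)"
    | eval field1_lower=lower(field1)
    | eventstats dc(field1) as case_variants by field1_lower
    | eval case_check=if(case_variants > 1, "case differs", "ok")
```

eventstats keeps every original event while attaching the per-group variant count, so each row can be flagged individually.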
I have an eval condition as below which is working fine:

| eval Project=if(app=="abc_def_123", "XYZ", "ZXT")

But if I use a wildcard as shown below, it doesn't work:

| eval Project=if(app=="abc_*", "XYZ", "ZXT")

How can I apply a wildcard here and get the required results?
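For what it's worth, `==` in eval is a literal comparison and does not expand wildcards; a sketch using match() instead (the `^abc_` regex anchor is an assumption about the intended prefix match) would be:

```
| eval Project=if(match(app, "^abc_"), "XYZ", "ZXT")
```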
I'm monitoring hosts files on Windows machines, but I don't want the comment lines when I ingest the file. However, my SEDCMD never seems to prevent the comment lines from being indexed.

My props.conf:

[source::C:\\Windows\\System32\\drivers\\etc\\hosts]
CHECK_METHOD = entire_md5
SEDCMD-comments = s/\#.*\n//g

A sample of the standard hosts file. In this example, I only want the last line in my event, 255.255.255.255 wpad:

# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

# localhost name resolution is handled within DNS itself.
#       127.0.0.1       localhost
#       ::1             localhost
255.255.255.255 wpad

Any advice on where my SEDCMD is wrong? The command seems to work fine in a search when I run | rex mode=sed "s/\#.*\n//g"
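One thing worth checking: SEDCMD runs per event after line breaking, so if each line of the hosts file becomes its own event, there is no \n left in the event for the sed expression to match. A sketch that drops comment lines via the null queue instead (the transform name is an assumption):

```
# props.conf
[source::C:\\Windows\\System32\\drivers\\etc\\hosts]
TRANSFORMS-drop_comments = hosts_drop_comments

# transforms.conf
[hosts_drop_comments]
REGEX = ^\s*#
DEST_KEY = queue
FORMAT = nullQueue
```

This routes any event beginning with # to the null queue at index time instead of trying to rewrite the raw text.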
Hi, do you have plans to release a newer version of the FireEye app that is compatible with Splunk 8.0? Is it something on your radar? Thanks
Check out this run-anywhere example dashboard XML. You might have to adjust your browser's zoom, but on my laptop, at all zoom levels, it is clipping both the label of the trellis and the value. How can I fix this?

<dashboard>
  <label>Trellis Scaling Example</label>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults | eval _raw="aShortName bThisIsMediumName cThis_Name_Is_Somewhat_Long dThis_Is_An_Absolutely_Absurdly_Long_Stinking_Name 24D 15H 0M 16D 4H 5M 13D 17H 45M 0D 0H 0M" | multikv forceheader=1 | fields - _time linecount _raw | eval foo="bar" | untable foo asset count | xyseries foo asset count | fields - foo</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
        <option name="trellis.enabled">1</option>
      </single>
    </panel>
  </row>
</dashboard>
Hi all, I have 10 events: event 1, event 2, event 3, ... event 10. I need to combine events 2, 3, 4 and events 7, 8 into one event each, i.e. one event for 2, 3, 4 and one event for 7, 8. The rest I don't need to index (send to the null queue). How can we do this at index time? Please help.
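For the "send the rest to the null queue" part, a transforms.conf sketch might look like the following (the sourcetype name and the REGEX are placeholders; the regex has to match whatever actually distinguishes the unwanted events):

```
# props.conf
[your_sourcetype]
TRANSFORMS-discard = drop_unwanted_events

# transforms.conf
[drop_unwanted_events]
REGEX = event\s?(1|5|6|9|10)\b
DEST_KEY = queue
FORMAT = nullQueue
```

Merging several events into one at index time is a separate problem (it depends on line-breaking rules for the sourcetype), so this only covers the discard half of the question.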
Hello, I want to build a dashboard with different panels to analyse some alarms. At the top I have 2 input fields: field1 (text input) and field2 (dropdown for status).

I want to have a panel, or a hidden query, where I build some values with | makeresults and write them into my index, like a comment function for the analysis. My problem is that if I use this query in the normal Search app:

| makeresults
| eval event_hash=123
| eval kommentar=abc
| eval wann = now()
| table event_hast kommentar wann

it works perfectly. But if I save this as a dashboard panel and look at my dashboard, it says that no results are returned. Does anyone know this issue? Thanks for your help.
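As a side note, two details in the query above are worth double-checking when reproducing it: string literals in eval need quotes (an unquoted abc is treated as a field reference), and the table command references event_hast while the eval creates event_hash. A cleaned-up sketch:

```
| makeresults
| eval event_hash=123
| eval kommentar="abc"
| eval wann=now()
| table event_hash kommentar wann
```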
We are building some deployments and have noticed that when we rebuild a cluster in k8s, the serverName in server.conf gets set to, for example, indexer1- and we would like it to be just indexer1. Is there a value like the following that we could use to set it?

env:
  - name: SPLUNK_HOME
    value: /opt/splunk
  - name: SPLUNK_START_ARGS
    value: "--accept-license"
  - name: SPLUNK_ROLE
    value: splunk_indexer
  - name: SPLUNK_INDEXER_URL
    value: indexer1,indexer2,indexer3,indexer4,indexer5,indexer6,indexer7,indexer8,indexer9
  - name: SPLUNK_SEARCH_HEAD_URL
    value: search1,search2,search3
  - name: SPLUNK_PASSWORD
    value: helloworld
  - name: DEBUG
    value: "true"
Hi all, could we use Splunk to monitor the CPU health of Palo Alto firewalls? There is an add-on on Splunkbase (https://splunkbase.splunk.com/app/3732/), but it is unofficial and not supported by Splunk. Has anyone used it and got the desired result, or can anyone suggest another method for monitoring the CPU health of Palo Alto firewalls?
There is a requirement in which I need to display the total count and the number of errors within that total. The error message is in the raw text.
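One way to sketch this (the index name and the "error" match string are assumptions about the data) is to flag error events with searchmatch() and sum the flag alongside the overall count:

```
index=your_index
| eval is_error=if(searchmatch("error"), 1, 0)
| stats count as total sum(is_error) as errors
```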
I want to balance the use of cache capacity with SmartStore: keep recent buckets in cache while allowing older buckets to be evicted, so I can still search them via the S3 object store. Based on what I read in https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/ConfigureSmartStorecachemanager, I believe setting "hotlist_recency_secs" and "hotlist_bloom_filter_recency_hours" would allow me to accomplish what I seek, i.e. protect buckets processed within the last 7 days and use the remaining cache capacity for buckets retrieved from S3. Can someone confirm my logic or point me in the right direction? Thanks, -v
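For concreteness, a sketch of what those settings might look like for a 7-day window (the values are illustrative; they can be set per index or under [default] in indexes.conf):

```
# indexes.conf
[default]
hotlist_recency_secs = 604800            # 7 days x 86400 seconds
hotlist_bloom_filter_recency_hours = 168 # keep bloom filters cached for 7 days
```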
Hi all, I have proper timestamp logs in Splunk, and I am able to extract the time for all searches except one:

index=mtp | stats count by Activity user

When I need the count for these two fields, I get the result but not the time. Can someone please suggest?
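stats drops _time unless it is part of the group-by; one common pattern (the 1h span is an assumption, adjust to taste) is to bucket time first:

```
index=mtp
| bin _time span=1h
| stats count by _time Activity user
```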
Hi, we are running several scheduled PowerShell scripts. Sometimes data is missing, and we found the following error in the splunk-powershell.ps1 log:

[scriptname] failed with exception=An item with the same key has already been added.

At the next scheduled time the script runs without error; some time later the error appears again. Any ideas? Thanks, Alex
Hello, for internal control we have to monitor all deactivations and all deletions of correlation searches. Unfortunately, we were not able to find a corresponding log event in the _audit index. However, all the needed information can be found with the search below:

| rest splunk_server=local count=0 /servicesNS/-/SplunkEnterpriseSecuritySuite/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rename action.correlationsearch.label as "Name"
| table Name disabled

The result should look like this:

Name                      | disabled
Outbreak Detected         | 0
SQL Injection Detected    | 0
Threat Activity Detected  | 1
etc.

The question is how we can detect the two conditions below:
- when the disabled field changes its value from 0 to 1
- when one of the Name values is not returned anymore

Do you have an idea how those searches could be implemented? Thanks for the help.
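One possible direction (the lookup file name is an assumption): schedule the REST search to snapshot its results into a lookup with outputlookup, and run a comparison search beforehand that diffs the live state against the previous snapshot. A sketch of the comparison:

```
| rest splunk_server=local count=0 /servicesNS/-/SplunkEnterpriseSecuritySuite/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rename action.correlationsearch.label as Name
| table Name disabled
| eval src="current"
| append [| inputlookup correlation_state.csv | eval src="previous"]
| stats values(eval(if(src="current", disabled, null()))) as disabled_now values(eval(if(src="previous", disabled, null()))) as disabled_before by Name
| where isnull(disabled_now) OR disabled_now!=disabled_before
```

A Name with a null disabled_now has disappeared since the last snapshot, and a changed disabled value catches the 0-to-1 deactivation case.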
I have some problems with configuring rows in the event log collection list. For now we use the default Splunk display for log lists, but I need to show more rows than 7.

[timeStamp=2020-03-24 14:43:42.612 +0000] [thread=ForkJoinPool-1-worker-115] [logLevel=INFO] [eventType=xxxxxxxxxxxxBetxxxxxmentxxxxxxder] - className=UpstreamLoggingHelper, methodName=loggingUpstreamError, message= requestId="xxxxxxxxxxxxxxxx" sessionToken="xxxxxxxxxxxxxxxx" userId="xxxxx" ref="xxxxxxxxxxxxx xxxxxxxxxxx" event="xxxxxxxxxxxxxx" eventDirection="Response" latency="256" httpCode="422" UPSTREAM ERROR= {
  "statusCode": "PurchaseNotAccepted",
  "statusDescription": "PurchaseNotAccepted",
  "response": {
    "id": "",
    "status": "Declined",
    "creationDate": "2020-03-24T14:43:42.1297069Z",
    "bets": [
      ... (truncated; 61 lines in total)

Who knows where I can make these changes?
Will the Splunk VMware TAs run with Splunk running in FIPS mode?
Scenario: I am using a script to poll a cloud API for data at frequent intervals. The data is stored in archived *.csv.gz files, and a UF installed on the same server is configured to monitor the folder:

inputs.conf:

[monitor:///apps/splunk/data]
sourcetype = data:1
index = data_1
_TCP_ROUTING = primary_indexers_site_1

The problem is that data only gets ingested after a restart of the UF Splunk service on the host, and then almost immediately stops, meaning I have to restart the UF every time I want to get new/current data. The script does not appear to be the issue, because it is constantly pulling new data into the folder as expected. Has anyone seen this before?
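One variation that is sometimes worth testing with archive files (purely a suggestion; the rest of the stanza is unchanged from the question) is adding crcSalt so the file path is mixed into the CRC check and similar-looking files are not skipped as already-seen:

```
# inputs.conf
[monitor:///apps/splunk/data]
sourcetype = data:1
index = data_1
_TCP_ROUTING = primary_indexers_site_1
crcSalt = <SOURCE>
```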
I installed the Fortinet FortiGate App 1.5.1 for Splunk as well as the Fortinet FortiGate Add-On 1.6.2 for Splunk, and configured the sourcetype in the props.conf file. After that I restarted the Splunk service. When I open the Fortinet FortiGate App and go to the Fortinet Network Security Overview, I have nice dashboards with data. However, dashboards such as Traffic and VPN are all empty, even though when I open the corresponding Searches and Reports I have data. Do I need to do something else to get the other dashboards working? I use Splunk 7.3.0.
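A quick sanity check that sometimes helps in this situation (the index and sourcetype patterns are assumptions about your setup) is to verify the events are actually being tagged with eventtypes, since app dashboards commonly filter on them:

```
index=* sourcetype=fortigate* | stats count by sourcetype eventtype
```

If eventtype comes back empty, the add-on's knowledge objects are likely not being applied to the data.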
I have a URL for fetching data which I hit through a browser; the data appears in tabular format. How do I fetch the data and store it in Splunk?
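One common pattern is a scripted input: a small script pulls the URL and prints the rows to stdout for Splunk to index. A sketch of the input stanza (the script name, interval, index, and sourcetype are all placeholders):

```
# inputs.conf
[script://./bin/fetch_table.py]
interval = 300
sourcetype = web:table
index = web_data
disabled = 0
```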
Hello, we are planning to change the Splunk login ID, which is linked with AD. The change is needed because the existing ID contains the national identity number, and we are changing to a new login ID without it. I want to make sure this change will not cause any major impact.

Will the change in ID cause any impact, for example on dashboards or reporting functions? What can be done before the login ID change?