All Topics

Hi, I am trying to fetch Splunk events created in the last 30 days with the query below, by selecting the time range "Last 30 days". But I seem to be getting all-time events for this query. Please suggest.

Query used:

index=servicenow eventtype=snow_change* sourcetype="snow:change_request" (change_state_name="Work Complete" OR change_state_name=Closed) earliest=-30d@d
| dedup number
| eval diff=strptime(dv_work_end,"%Y-%m-%d %H:%M:%S")-strptime(dv_work_start,"%Y-%m-%d %H:%M:%S")
| eval Downtime=round((diff/60),3)
| table number Downtime host dv_work_start dv_work_end

Splunk events output: 1,285 events (1/28/20 12:00:00.000 AM to 2/27/20 5:30:31.555 PM)

number      Downtime   host          dv_work_start        dv_work_end
CHG0129357  300.000    kmci4odw2023  2020-01-19 21:00:00  2020-01-20 02:00:00
CHG0129566  120.000    kmci4odw2023  2020-01-19 23:30:00  2020-01-20 01:30:00
CHG0129494  99.250     kmci4odw2023  2020-01-19 23:48:54  2020-01-20 01:28:09
CHG0129795  4320.367   kmci4odw2023  2020-01-20 10:55:10  2020-01-23 10:55:32
CHG0129116  1110.000   kmci4odw2023  2020-01-20 13:00:00  2020-01-21 07:30:00
CHG0129536  1380.000   kmci4odw2023  2020-01-20 13:30:00  2020-01-21 12:30:00
CHG0129632  88.250     kmci4odw2023  2020-01-20 15:05:04  2020-01-20 16:33:19
CHG0129634  120.000    kmci4odw2023  2020-01-20 16:15:00  2020-01-20 18:15:00
CHG0129585  120.000    kmci4odw2023  2020-01-20 17:00:00  2020-01-20 19:00:00
CHG0129389  155.100    kmci4odw2023  2020-01-20 22:30:25  2020-01-21 01:05:31
CHG0129593  0.000      kmci4odw2023  2020-01-20 23:30:00  2020-01-20 23:30:00
CHG0129647  90.667     kmci4odw2023  2020-01-21 04:30:00  2020-01-21 06:00:40
CHG0129323  1440.000   kmci4odw2023  2020-01-21 07:00:00  2020-01-22 07:00:00
CHG0128642  60.000     kmci4odw2023  2020-01-21 09:00:00  2020-01-21 10:00:00
CHG0129555  151.300    kmci4odw2023  2020-01-21 09:00:25  2020-01-21 11:31:43
CHG0128772  90.000     kmci4odw2023  2020-01-21 09:30:00  2020-01-21 11:00:00
CHG0129613  1440.000   kmci4odw2023  2020-01-21 09:30:00  2020-01-22 09:30:00
CHG0129234  1440.000   kmci4odw2023  2020-01-21 09:30:00  2020-01-22 09:30:00
CHG0129955  10080.000  kmci4odw2023  2020-01-21 09:55:51  2020-01-28 09:55:51
CHG0129650  57.800     kmci4odw2023  2020-01-21 10:00:00  2020-01-21 10:57:48
CHG0128646  120.000    kmci4odw2023  2020-01-21 10:00:00  2020-01-21 12:00:00
CHG0129667  1230.000   kmci4odw2023  2020-01-21 13:00:00  2020-01-22 09:30:00
CHG0128650  3120.000   kmci4odw2023  2020-01-21 13:00:00  2020-01-23 17:00:00
CHG0129676  120.000    kmci4odw2023  2020-01-21 13:15:00  2020-01-21 15:15:00
CHG0129461  119.500    kmci4odw2023  2020-01-21 13:30:30  2020-01-21 15:30:00
CHG0129446  60.000     kmci4odw2023  2020-01-21 16:00:00  2020-01-21 17:00:00
CHG0129292  50.000     kmci4odw2023  2020-01-21 17:00:00  2020-01-21 17:50:00
CHG0129679  35.000     kmci4odw2023  2020-01-21 17:20:00  2020-01-21 17:55:00
CHG0129709  420.000    kmci4odw2023  2020-01-21 19:00:00  2020-01-22 02:00:00
CHG0129526  167.917    kmci4odw2023  2020-01-21 21:00:00  2020-01-21 23:47:55
CHG0129677  180.000    kmci4odw2023  2020-01-21 21:30:00  2020-01-22 00:30:00
CHG0129646  40.183     kmci4odw2023  2020-01-21 23:35:37  2020-01-22 00:15:48
CHG0129567  296.883    kmci4odw2023  2020-01-22 00:25:57  2020-01-22 05:22:50
CHG0129417  1450.000   kmci4odw2023  2020-01-22 07:00:00  2020-01-23 07:10:00
CHG0129295  10.000     kmci4odw2023  2020-01-22 07:00:00  2020-01-22 07:10:00
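A sketch of one possible cause, offered as an assumption rather than a confirmed diagnosis: `earliest` and the time picker both filter on the event's `_time`, which for ServiceNow records often reflects when the record was indexed or last updated, not `dv_work_start`. If the goal is "work started in the last 30 days", filtering on the field itself may give the intended window:

```
index=servicenow eventtype=snow_change* sourcetype="snow:change_request"
    (change_state_name="Work Complete" OR change_state_name=Closed)
| dedup number
| eval start_epoch = strptime(dv_work_start, "%Y-%m-%d %H:%M:%S")
| where start_epoch >= relative_time(now(), "-30d@d")
| eval diff = strptime(dv_work_end, "%Y-%m-%d %H:%M:%S") - start_epoch
| eval Downtime = round(diff/60, 3)
| table number Downtime host dv_work_start dv_work_end
```

Swapping the `relative_time` offset (e.g. a `-1mon@mon` / `@mon` pair of bounds) would cover the "previous month" variant of the same question.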
I am trying to set a token when someone uses a hyperlink in a dashboard. In theory, using the attributes below as part of the anchor element should work, but it does not. Does anyone have any thoughts on how to get it to work?

data-set-token="token" data-value="new token value"

I am on version 7.3. Many thanks for your time.
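A minimal sketch of where those attributes are honored: in Simple XML, `data-set-token`/`data-value` work on anchors placed inside an `<html>` panel of the dashboard, not in arbitrary markup elsewhere. The dashboard skeleton and the token name `mytoken` below are assumptions for illustration:

```xml
<dashboard>
  <row>
    <panel>
      <html>
        <!-- clicking the link should set $mytoken$ to "new token value" -->
        <a href="#" data-set-token="mytoken" data-value="new token value">Set the token</a>
        <p>Current value: $mytoken$</p>
      </html>
    </panel>
  </row>
</dashboard>
```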
Is it possible to adjust the height of table rows on a dashboard, in order to allow a smaller font and thus fit more rows on screen? I've tried setting it via CSS, but that only lets me set the column width, and even that is hit-and-miss. Many thanks in advance.
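A sketch of one approach, assuming a Simple XML dashboard: row height in Splunk tables is driven mostly by cell padding and line-height rather than any height property, so overriding those (together with font-size) in an inline `<style>` block may achieve it. The panel id `mytable` and the pixel values are assumptions to tune:

```xml
<dashboard>
  <row>
    <panel>
      <html depends="$neverShow$">
        <style>
          /* shrink font and vertical padding for the table with id "mytable" */
          #mytable table td, #mytable table th {
            font-size: 10px !important;
            line-height: 12px !important;
            padding-top: 2px !important;
            padding-bottom: 2px !important;
          }
        </style>
      </html>
      <table id="mytable">
        <search>
          <query>index=_internal | stats count by sourcetype</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```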
Hello, we have a source ABC sending us logs, which are stored in an index called all_logs. From that source, we want to separate the events which contain the field SiteUrl: https://www.abc.com/site/PathologyPHI and send those logs to a new index called new_logs, while the rest of the logs should still go to the index all_logs.

This is what I've tried so far, but all the logs still end up in the index all_logs instead of being routed. FYI, props.conf and transforms.conf are kept under $SPLUNK_HOME/etc/system/local, while the inputs.conf is under the local directory of the TA that brings in the logs. I had to do this because props.conf also defines a setting for another sourcetype. Any suggestions regarding moving them back to the TA would be appreciated as well.

In props.conf, I made the following change:

[abc:management:activity]
TRANSFORMS-routing = abc_logs

In transforms.conf, I did this:

[abc_logs]
REGEX = SiteUrl:.+PathologyPHI
DEST_KEY = _MetaData:Index
FORMAT = new_logs

Here is the inputs.conf for the source. There are other inputs too, but this one brings in the events we want to route. We do not want to route all the events coming off this input, only the ones matching the regex:

[splunk_ta_abc_management_activity://Auditabc]
content_type = Audit.abc
index = all_logs
sourcetype = abc:management:activity
interval = 120
tenant_name = ABC
disabled = 0
start_by_shell = false

Any help will be highly appreciated. Thank you
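A sketch of one thing worth checking, stated as an assumption about the deployment rather than a confirmed fix: index-time transforms only take effect on the instance that parses the data (a heavy forwarder or the indexers). If the TA and its modular input run on one instance while the props/transforms sit in $SPLUNK_HOME/etc/system/local of another, the routing never fires. Placing the stanzas on the parsing tier, for example in the TA's own local directory there (the `<TA_name>` placeholder below is hypothetical), keeps them deployed together:

```ini
# <parsing tier>/etc/apps/<TA_name>/local/props.conf
[abc:management:activity]
TRANSFORMS-routing = abc_logs

# <parsing tier>/etc/apps/<TA_name>/local/transforms.conf
[abc_logs]
# SOURCE_KEY defaults to _raw, so the regex must match the raw event text,
# including whatever punctuation actually surrounds "SiteUrl" in the events
REGEX = SiteUrl:.+PathologyPHI
DEST_KEY = _MetaData:Index
FORMAT = new_logs
```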
For example, I see the regex to change the sourcetype from pan:log to pan:threat is:

REGEX = ^[^,]+,[^,]+,[^,]+,THREAT,

However, the corresponding portion of the raw data I am receiving looks like this:

,2020/02/21 22:09:08,,TRAFFIC,

With two commas in a row (an empty field), the add-on does not perform any sourcetype change. I am receiving these logs from Cortex; not sure whether that could have something to do with it? Thanks a lot
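A sketch of one possible adjustment, offered as an assumption rather than the add-on's official fix: `[^,]+` requires every leading field to be non-empty, so changing the quantifier to `*` lets the empty CSV fields (the double commas) match. The stanza name below is hypothetical, and note the sample shown is a TRAFFIC line, which a THREAT regex would rightly skip:

```ini
# transforms.conf sketch (hypothetical stanza name)
[force_sourcetype_pan_threat]
# [^,]* instead of [^,]+ tolerates empty leading fields
REGEX = ^[^,]*,[^,]*,[^,]*,THREAT,
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:threat
```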
Hi, I have an index with 7 sourcetypes. For a particular reason, I had to delete the index. I did a refresh, then re-created it. But when I checked whether all my sourcetypes were there, I found only 6; one of them has not been re-added by Splunk, and I don't understand why.

Also, I've created a new index with a new sourcetype, but after restarting the Splunk service, nothing shows up in the Web GUI when I search on this new index. I don't understand what is happening. Has anyone had the same issue, and do you know how to resolve it? Thank you.
The Splunk event time and the timestamp in the log file do not match. Our log file has the following timestamp entry:

"2020-02-20 10:14:59.363"

But that time and the Splunk event time do not match. How can I fix it? Below is how the sourcetype is currently set (pasted as-is; note the two conflicting SHOULD_LINEMERGE lines):

[name]
SHOULD_LINEMERGE = True
BREAK_ONLY_BEFORE = Id-
SHOULD_LINEMERGE=false
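A sketch of an explicit timestamp configuration, under two assumptions: that the timestamp appears near the start of each event, and that the mismatch comes from Splunk guessing the format or the timezone. The TIME_PREFIX anchor and TZ value below are assumptions to adjust for the actual log layout:

```ini
# props.conf sketch for the [name] sourcetype
[name]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = Id-
# anchor timestamp extraction explicitly instead of letting Splunk guess
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
# set to the timezone the log is written in, if it differs from the indexer's
TZ = UTC
```

A consistent offset between the log timestamp and the Splunk event time (whole hours) usually points at TZ; a seemingly random offset usually points at TIME_FORMAT/TIME_PREFIX.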
I just set up the inputs for AAD using the Microsoft Azure Add-on for Splunk. One of the inputs I set up is for AAD users; what I assume it does is pull all the info about AD users once and then only pull new users or changes to existing users. However, when I enabled this input, CPU usage on the IDM skyrocketed to 99% and the add-on kept pulling all the user profiles again and again, so I had to disable it.

I get no errors at all in the _internal index; I only get some INFO events every second or so saying that the proxy is not enabled, which is expected since I'm not going to be using a proxy. Is this expected behaviour? Did I set something up incorrectly? When creating the input, I only filled in the mandatory options and set the interval to 30s. Even when raising the interval, it still pulls the same data again and I end up with duplicates.

I can't find anyone else having a similar issue, so I'm contemplating just setting the interval to something like once per day and living with the duplicate data. It does seem strange, though, that the add-on works like that.

EDIT: In addition to the above, when I run the search below I see checkpoints for the rest of my inputs, but nothing for the AAD user input.

| inputlookup TA_Azure_checkpoint_lookup | eval key = _key
Hi all, I have used a CSV lookup file to map values onto my CSV data. Can I use a JSON file instead of a CSV file as the lookup to map the values?
Dear support, I use an OEM license, which means I can only use sourcetypes with the specific fixed prefix "psc_". I uploaded a CSV file with the default sourcetype, when I should have used a sourcetype with the psc_ prefix. After I uploaded this file, I got the "Orphaned indexer alerts (1)" message, with the text: "This slave indexed data/sourcetype(s) without a corresponding license pool". The category is "orphan_slave".

I have deleted all the CSV data uploaded with the default sourcetype and deleted the csv sourcetype, but the alert is still here. How can I clear this alert? Thanks! Bright
Hi, I am trying to pull event logs from remote machines using universal forwarders. I have done the configuration in the inputs.conf file; below is my current inputs.conf:

[WinEventLog://Application]
disabled = 0
index = win_events
crcSalt = <SOURCE>

[WinEventLog://Security]
disabled = 0
index = win_events
crcSalt = <SOURCE>

[WinEventLog://System]
disabled = 0
index = win_events
crcSalt = <SOURCE>

[WinEventLog://Setup]
disabled = 0
index = win_events
crcSalt = <SOURCE>

Now, I don't want all event codes from the logs; I require only 4800 and 4801. Is there any way in which only the events related to those two event codes can be forwarded to an index? Thanks
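A sketch of event-code filtering, assuming a reasonably recent Universal Forwarder: WinEventLog inputs accept a whitelist of event IDs, so only matching events are collected and forwarded. Shown here for the Security log, since 4800/4801 (workstation lock/unlock) are Security events:

```ini
# inputs.conf sketch on the universal forwarder
[WinEventLog://Security]
disabled = 0
index = win_events
# collect only EventCode 4800 (lock) and 4801 (unlock)
whitelist = 4800-4801
```

Filtering at the input like this discards the unwanted events before they are forwarded, which also reduces license usage compared with filtering at search time.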
WARN UTF8Processor - Using charset UTF-8, as the monitor is believed over the raw text which may be UTF-16LE - data_source="/var/log/XXX.log", data_host="xxx", data_sourcetype="config"

Too many sourcetypes produce this warning message. I know I need to set the charset for the sourcetype in props.conf, like this:

[config]
CHARSET = UTF-16LE

But I don't want to do that; I want Splunk to resolve it automatically. So I found a possible solution:

[config]
CHARSET = AUTO

However, the message still occurs. What is the right way to get rid of this UTF8Processor warning message?
I need to get logs which are older than 90 days in Splunk, but our retention policy is only 90 days. If it is possible to get them, kindly guide me.
It never showed failed sites. It turns out the original search was:

| search status="Failure" | stats count

while our Status Overview showed sites being in state "Failed" instead of "Failure". Easy fix...
For context, I attempted to install the Splunk App for Unix and Linux, and when I uploaded the .tgz file to the indexer, I was greeted with an error which I unfortunately didn't save.

Attempted troubleshooting:
1. Curling the website before visiting it in a browser succeeds, though it returns an untrusted-certificate error (I run Splunk over TLS, so this is normal for my deployment).
2. After visiting the web interface in either Chrome or Firefox, in and out of incognito, the browser attempts the connection for a while and then reports that it is unable to find the web page.
3. Any subsequent curls from the indexer machine result in refused connections.
4. Restarting the splunk daemon resets the entire sequence: curls succeed and the web interface runs until I visit it in a browser.
5. The Splunk deployment was fully operational before I attempted to install the Splunk App for Unix and Linux, which rules out machine and networking problems.

Environment information:
1. Splunk Enterprise on a dev license.
2. The indexer runs in a CentOS 8 VM on an ESXi 6.5 host.

Questions related to the problem:
1. Has anyone experienced this problem before?
2. What log files can I look into to get a better idea of what the problem is?
Hi, I am using the query below to fetch change-request events created in the last 30 days, but whatever time range I select, I seem to get all-time events. Can anyone suggest how to get only events created in the previous month or in a specific time period?

Query used:

index=servicenow eventtype=snow_change* sourcetype="snow:change_request" (change_state_name="Work Complete" OR change_state_name=Closed)
| dedup number
| eval diff=strptime(dv_work_end,"%Y-%m-%d %H:%M:%S")-strptime(dv_work_start,"%Y-%m-%d %H:%M:%S")
| eval Downtime=round((diff/60),3)
| table number Downtime host dv_work_start dv_work_end

Events shown:

number      Downtime  host          dv_work_start        dv_work_end
CHG0129357  300.000   kmci4odw2023  2020-01-19 21:00:00  2020-01-20 02:00:00
CHG0129566  120.000   kmci4odw2023  2020-01-19 23:30:00  2020-01-20 01:30:00
CHG0129494  99.250    kmci4odw2023  2020-01-19 23:48:54  2020-01-20 01:28:09
CHG0129795  4320.367  kmci4odw2023  2020-01-20 10:55:10  2020-01-23 10:55:32
CHG0129116  1110.000  kmci4odw2023  2020-01-20 13:00:00  2020-01-21 07:30:00
Hello, I have been working on breaking events which come from the Splunk REST API add-on output. The default "_json" sourcetype treats the entire API response as a single event. My aim is to get individual events for the objects under "results". I have tested some custom sourcetypes using props.conf, but none of them seems to work. Please help me with the props.conf entries.

Required individual event example (NB: the fields in each event are dynamic):

{
  "created" : "2020-02-27T06:14:34Z",
  "eventTypeName" : "EVENT1",
  "groupId" : "xxx",
  "id" : "xxx",
  "isGlobalAdmin" : false,
  "links" : [ { "href" : "https://example.com/api/v2", "rel" : "self" } ]
}

I am pasting the entire API response below:

{
  "links" : [ { "href" : "https://example.com/api/v2", "rel" : "self" } ],
  "results" : [ {
    "created" : "2020-02-27T06:14:34Z",
    "eventTypeName" : "EVENT1",
    "groupId" : "xxxxx",
    "id" : "xxxx",
    "isGlobalAdmin" : false,
    "links" : [ { "href" : "https://example.com/api/v2", "rel" : "self" } ]
  }, {
    "clusterName" : "splunk-cluster",
    "created" : "2020-02-27T06:14:33Z",
    "eventTypeName" : "EVENT2",
    "groupId" : "xxxxx",
    "id" : "xxxxx",
    "isGlobalAdmin" : false,
    "links" : [ { "href" : "https://example.com/api/v2", "rel" : "self" } ]
  }, {
    "created" : "2020-02-27T06:14:32Z",
    "eventTypeName" : "EVENT3",
    "groupId" : "xxxxxx",
    "id" : "xxxxxx",
    "isGlobalAdmin" : false,
    "links" : [ { "href" : "https://example.com/api/v2", "rel" : "self" } ],
    "remoteAddress" : "xx.xx.xx.xx",
    "userId" : "xxxxxx",
    "username" : "Sam-test"
  } ],
  "totalCount" : 3
}
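One common sketch, with the strong caveat that regex-based splitting of JSON is fragile and the sourcetype name here is an assumption: break between the objects of the "results" array, then trim the wrapper before the first object and after the last one with SEDCMD.

```ini
# props.conf sketch (hypothetical sourcetype name)
[rest:results]
SHOULD_LINEMERGE = false
# break on the comma between "}," and the "{" of the next results object;
# this can misfire if a nested array ever holds more than one object
LINE_BREAKER = \}(,)\s*\{
# drop the wrapper up to the first results object, and the tail after the last
SEDCMD-strip_head = s/^\{.*"results"\s*:\s*\[\s*\{/{/
SEDCMD-strip_tail = s/\}\s*\]\s*,\s*"totalCount".*$/}/
TIME_PREFIX = "created"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%SZ
```

If the add-on or a scripted input can be made to emit one object per response line instead, that is generally more robust than reshaping the payload at index time.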
I have a dev and a prod setup. We cannot have the UF agent installed on Splunk infrastructure servers, as Splunk does not support it, so we have set up a way to collect capacity/CPU/memory data for our Splunk servers just as the UF agent would. We now have production server data on the production indexers and dev server data on the dev indexers, but we present it in a report that lives on production. So we need to send the dev indexers' data for index=test to the production indexers (index=test), so that the capacity data for development also appears in the production report. What is the best way to send a selective index (index=test) from the dev indexers to the production indexers (index=test) so that the production report can see both data sets?
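A sketch of one approach, with server names and ports as assumptions: keep the dev indexers indexing locally, define a target group for the production indexers, and use an index-time transform that sets _TCP_ROUTING only for events destined for index=test. Note this only applies to data that is actually parsed on the dev indexers, not to data arriving pre-cooked from another full Splunk instance:

```ini
# outputs.conf on the dev indexers (server names/ports are assumptions)
[tcpout]
defaultGroup = nothing
indexAndForward = true

[tcpout:prod_indexers]
server = prod-idx1:9997,prod-idx2:9997

# props.conf -- apply the routing transform broadly; the transform itself
# only matches events whose index is "test"
[default]
TRANSFORMS-route_test = route_test_to_prod

# transforms.conf
[route_test_to_prod]
SOURCE_KEY = _MetaData:Index
REGEX = ^test$
DEST_KEY = _TCP_ROUTING
FORMAT = prod_indexers
```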
I need to install an SSL certificate for Splunk Web on the index cluster master. I created the CSR and key on the server and got the certificate from the CA; the certificate provided is in .CRT format. I then produced the .PEM file using the command below:

cat server_name.csr server_name.key ca_provided_certificate.com.crt > certificate.com.pem

I updated web.conf in local with the settings below:

[settings]
enableSplunkWebSSL = true
serverCert = /application/splunk/etc/auth/splunkweb/certificate.com.pem

After saving, when I try restarting Splunk, it gets stuck starting the web interface with the error below:

Waiting for web server at https://10.0.1.1:8000 to be available...

Can someone please help?
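A sketch of how that PEM is usually assembled, assuming the filenames above (the intermediate_ca.crt name below is hypothetical): the CSR does not belong in the served certificate file. The chain should be the signed server certificate followed by any CA/intermediate certificates, with the private key referenced separately via privKeyPath:

```ini
# shell sketch: the PEM should contain the signed certificate plus any
# intermediate CA certs -- not the CSR, and not necessarily the key:
#   cat ca_provided_certificate.com.crt intermediate_ca.crt > certificate.com.pem

# web.conf [settings] sketch
[settings]
enableSplunkWebSSL = true
serverCert = /application/splunk/etc/auth/splunkweb/certificate.com.pem
privKeyPath = /application/splunk/etc/auth/splunkweb/server_name.key
```

If the key was generated with a passphrase, Splunk Web cannot load it unattended, which can also produce the "Waiting for web server" hang.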
Hello. I have data like the following:

issue.id,Key
1111 2222
null 3333

When issue.id is null, I would like to substitute the value of Key into issue.id. How should I go about this?
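A sketch, assuming issue.id is either genuinely null or the literal string "null" (a field name containing a dot must be wrapped in single quotes inside eval):

```
... | eval 'issue.id' = if(isnull('issue.id') OR 'issue.id'=="null", Key, 'issue.id')
```

If the field is only ever genuinely null (never the string "null"), `| eval 'issue.id' = coalesce('issue.id', Key)` is a shorter equivalent.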