All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I have a field called sequence_anomalies which consists of a lot of individual elements. Once I put it into a table, it looks like the example below. Now I am wondering whether it's possible to make a table view where each row contains only one element, instead of all the elements being in one row. Thank you in advance! Regards,
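One approach that may do this (a minimal sketch; sequence_anomalies comes from the question, the base search is a placeholder): expand the multi-value field so each value becomes its own row:

index=your_index
| table sequence_anomalies
| mvexpand sequence_anomalies

If sequence_anomalies is a single delimited string rather than a true multi-value field, it may first need makemv with the appropriate delimiter.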
I have a search that pulls in user login info with lat and lon. I'm trying to calculate the distance between two coordinates for the same username. If there isn't a match on username, I want it to move on to the next match, and then output the distance between the two along with the login time.
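A hedged sketch of one common way to do this (the field names user, lat, and lon are assumptions; distance via the haversine formula with an Earth radius of 6371 km): pair each login with the same user's previous login using streamstats, then compute the distance with eval:

index=your_index
| sort 0 user _time
| streamstats current=f window=1 last(lat) as prev_lat last(lon) as prev_lon by user
| where isnotnull(prev_lat)
| eval rlat1=prev_lat*pi()/180, rlat2=lat*pi()/180, dlat=(lat-prev_lat)*pi()/180, dlon=(lon-prev_lon)*pi()/180
| eval a=pow(sin(dlat/2),2) + cos(rlat1)*cos(rlat2)*pow(sin(dlon/2),2)
| eval distance_km=round(2*6371*atan2(sqrt(a), sqrt(1-a)), 2)
| eval login_time=strftime(_time, "%F %T")
| table user login_time distance_km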
Hello, I would like to save/dump the network metadata that is forwarded from an endpoint to a (text or binary) file. Does Splunk Stream support such a feature? I checked the documentation and didn't find anything related. Thank you!
At the beginning of February this year we started ingesting events from Autosys prod and non-prod servers. All was going well until the end of the month. At 23:59 on 28th Feb, events stopped appearing in the index into which they are being ingested. A few days later they started again at midnight. None of the missed events were collected. The log files for Autosys roll over at midnight, so I would not expect events from the missing time period to be collected, as the files would have been renamed. This has happened every month since. In March the events started coming in on the 3rd, in April it was the 4th, in May the 5th, and in June they appeared again on the 6th. The only pattern I can see is that events reappear on the day of the month that equals the number of the month, i.e. 3/3, 4/4, 5/5 and 6/6. If that holds, events will not appear in July until the 7th. How can this be? I can see no _internal messages that help. Can someone shed some light on this? Thanks
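A hedged diagnostic sketch (the _internal index is standard, but the component list and the "autosys" path filter are assumptions to adjust): the forwarder's own internal logs usually record how the monitored files are tracked across the midnight roll-over, e.g.

index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) autosys

which may show whether the renamed files are being treated as already seen.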
Do we know why Splunk search behaves as shown below?

Search-1:

| makeresults
| eval group_by_field="A", other_field_1="1", other_field_2="test1"
| append [| makeresults | eval group_by_field="A", other_field_1="2", other_field_2="test2"]
| join type=left group_by_field [| makeresults | eval group_by_field="A", inventory_field="upperA~~characterA" | makemv inventory_field delim="~~"]
| search inventory_field="upperA"

This gives 0 results.

Search-2:

| makeresults
| eval group_by_field="A", other_field_1="1", other_field_2="test1"
| append [| makeresults | eval group_by_field="A", other_field_1="2", other_field_2="test2"]
| join type=left group_by_field [| makeresults | eval group_by_field="A", inventory_field="upperA~~characterA"]
| makemv inventory_field delim="~~"
| search inventory_field="upperA"

This gives 2 results as expected, with all fields. It seems makemv (multi-value field creation) does not work inside the join subsearch. Do we know if this is documented behaviour or a bug?
Hi Team, We have a requirement for HP OpenVMS. Has anyone implemented AppDynamics on HP OpenVMS? Regards, Mandar Kadam
Hi, I have JSON logs which I onboarded to Splunk:

{"body":{"records": {"time": "2020-12-20T13:28:50.2164144Z","MachineGroup": "Windows 10", "Timestamp": "2020-12-20T13:27:18.6679858Z", "DeviceName": "3242d4e4.dc.democorp.com", "ReportId": 306737}}},"x-opt-sequence-number":159959006,"x-opt-offset":"2713650553292728","x-opt-enqueued-time":1624195823422}

I am trying to remove everything after "}}}" with SEDCMD, and my props.conf is below:

[json_log]
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
DATETIME_CONFIG = CURRENT
TRUNCATE = 0
SEDCMD-unwantedfields=s/\}\}\}(.*)/g

The fields are not in the raw logs; however, when expanding the event details I can see the field values. Any suggestions on what I am doing wrong? https://regex101.com/r/btYSah/1
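For reference, SEDCMD values follow the sed substitute form s/<regex>/<replacement>/<flags>, and the expression above does not cleanly separate a replacement and flags. A hedged sketch of what the intent appears to be (keep the closing braces, drop the trailing metadata; unverified against this feed and against its interaction with INDEXED_EXTRACTIONS = json):

SEDCMD-unwantedfields = s/\}\}\}.*$/}}}/g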
Hello everybody, this is my first case in this community as I am relatively new to Splunk. I am encountering "External search command 'sendfile' returned error code 1." for the search below. The saved search returns the correct results when run independently, but the output is always the same when trying to send an automated report from the job described. Worth mentioning: if I sample the results, the report is sent; otherwise Splunk returns the error above. Do you know where I can look to fix it? Thank you in advance!

| savedsearch "savedsearchname"
| outputxls [| stats count | eval search=strftime(now(), "File_name_%Y%m%d.xlsx") | fields search]
| sendfile "sender" "receiver" [| stats count | eval first=strftime(relative_time(now(),"-1y"), "Title of report - %d/%m/%Y") | eval search = first+strftime(relative_time(now(),"@d")," and %d/%m/%Y") | eval search = "\""+search+"\"" | fields search] "report testing insert text here" [| stats count | eval search=strftime(now(), "File_name_%Y%m%d.xlsx") | fields search] "smtp.server"
Hi, I have installed httpd using the command "yum install httpd", but when I check the status it shows as not active:

● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:httpd(8)
man:apachectl(8)

I am not able to start/run Apache. How do we configure Apache 2.4.x and above on a Linux server?
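For what it's worth, "Active: inactive (dead)" usually just means the unit has not been started yet. A minimal sketch of the usual next steps on a systemd-based distribution (assuming nothing else is misconfigured):

sudo systemctl start httpd
sudo systemctl enable httpd
sudo systemctl status httpd
journalctl -xeu httpd.service

The last command shows why the start failed, if it does.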
Hello everyone. I'm a new Splunk user and I have a problem. I have a log file sent from another system, and there are many fields in it. The file type is JSON and most of the fields are useless. For example, fields like brand, file path, and device are useless; I want to hide those fields and only view the field named "Stack Trace" and the date. How can I do this? I would be grateful for any help.
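A minimal sketch (assuming the extracted field is literally named "Stack Trace" and the date lives in _time; the index and sourcetype names are placeholders): limit the displayed columns instead of removing the other fields from the data:

index=your_index sourcetype=your_json_sourcetype
| rename "Stack Trace" as stack_trace
| table _time stack_trace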
I don't know how to integrate Patlite with Splunk.
Hello all, I am facing the issue below while trying to get a result to add to a dashboard. I am trying to get the count of servers based on their Overall_Status.

| My search
| search Overall_Status="Success" OR Overall_Status="Failed" OR Overall_Status="Skipped" OR Overall_Status="NA"

I now need the result in the form below, which could then be added to the dashboard. Can you please guide me through this?

Success    Failed    Skipped    NA
8          4         2          2
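A hedged sketch that produces a single row in that shape (assuming Overall_Status has exactly these four values; the base search is the one from the question):

| My search
| stats count(eval(Overall_Status="Success")) as Success count(eval(Overall_Status="Failed")) as Failed count(eval(Overall_Status="Skipped")) as Skipped count(eval(Overall_Status="NA")) as NA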
I want to upload several CSV files, and I would like each CSV file to be a separate chart instead of being added to an existing chart. I may upload more CSV files later, and I want each of those to be a separate chart too, but I can't work out how.
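A hedged sketch of one way to keep them separate (assuming each CSV is uploaded via Add Data so it keeps its own source value, and that the index, file, and field names here are placeholders): give each dashboard panel its own search filtered to a single file:

index=your_index source="first_file.csv"
| chart count by some_field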
Hello guys, newbie here. I've got data that's being sent to a generic sourcetype and I want to carve out another sourcetype based on this particular one. Is that possible, and are there any ramifications to note in doing this?
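For context, this is commonly done with an index-time sourcetype override. A minimal sketch, where the stanza names, regex, and new sourcetype name are all assumptions to adapt, placed on the parsing tier (heavy forwarder or indexers):

props.conf:
[your_generic_sourcetype]
TRANSFORMS-carve = set_new_sourcetype

transforms.conf:
[set_new_sourcetype]
REGEX = pattern_that_identifies_the_subset
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::your_new_sourcetype

One ramification worth noting: the override happens at index time, so it only affects newly indexed events.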
Hi Team, Good day! Please note that we chose the cluster architecture for Splunk Phantom SOAR, and the components are: 3 nodes (Node1, Node2 and a master node), HA/load balancer, PostgreSQL DB, GlusterFS, and embedded Splunk. My concern is how to carry out the installation of embedded Splunk on Splunk Phantom, since it's a separate component in my environment. Is there a specific installer (RPM) for this? Could you help me with any docs or the sequence of steps for the initial installation of SOAR on a cluster? Thanks in advance. Regards, Yeswanth M
Hello All, I am having an issue where log ingestion is delayed by hours after I updated the Splunk license. The license expired, and after 24 hours we got a renewed license, but after applying it the logs are arriving several hours late. Before reaching out to Splunk, I wanted to check whether this is because of the new license or possibly something else.
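A hedged sketch for quantifying the delay (the index name and time range are placeholders): comparing index time against event time shows how far behind ingestion is running per sourcetype:

index=your_index earliest=-4h
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag by sourcetype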
Below is the error I am receiving in splunkd.log on the Windows Splunk UF. The deployment server functionality is working correctly and able to send apps to the endpoint; it is just the data from the UF to the indexer that is not working. When I do a netstat -ano from the Windows host with the UF, there is a connection that is established (as well as on the Splunk indexer side).

07-04-2021 11:28:56.555 -0400 ERROR TcpOutputFd - Read error. An existing connection was forcibly closed by the remote host.

I have made sure that there is a path from the host to Splunk on the firewall and that no inspection occurs. I have a Splunk indexer cluster with 3 indexers and a cluster master that also has indexer discovery configured. I thought that there might be something wrong with the clustered setup, but I created an all-in-one Splunk instance just to test and still get the same message. The deployment server feature is functioning just fine, and I am able to push out apps to the UF. On the server side I get the following error message on my indexer:

07-04-2021 11:28:55.535 -0400 ERROR TcpInputProc - Error encountered for connection from src=xxx.xxx.xxx.9:37765. Read Timeout Timed out after 600 seconds.

Thanks in advance and looking forward to any help.
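A hedged diagnostic sketch (the host value is a placeholder, and it only helps if some forwarding gets through, since the UF ships its own _internal logs by default): reviewing the forwarder's output-side messages from the indexer side, e.g.

index=_internal sourcetype=splunkd host=your_uf_hostname (component=TcpOutputProc OR component=TcpOutputFd OR component=AutoLoadBalancedConnectionStrategy)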
Which props.conf setting does Splunk use to extract interesting fields from _raw? I am trying to use the collect command to get _raw data from one index into another; however, it does not extract interesting fields. If I specify sourcetype=splunkd, it does extract interesting fields. I understand that using a sourcetype other than stash will consume license, so I should be able to create a custom field extraction for the stash source file paths without taking any license. I ran ./splunk btool props list splunkd and this is what it shows:

[splunkd]
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
EXTRACT-fields = (?i)^(?:[^ ]* ){2}(?:[+\-]\d+ )?(?P<log_level>[^ ]*)\s+(?P<component>[^ ]+) - (?P<event_message>.+)
HEADER_MODE =
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 40
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = false
TIME_FORMAT = %m-%d-%Y %H:%M:%S.%l %z
TRANSFORMS =
TRUNCATE = 20000
detect_trailing_nulls = false
maxDist = 100
priority =
sourcetype =
termFrequencyWeightedDist = false

For the default stanza, it shows:

[default]
ADD_EXTRA_TIME_FIELDS = True
ANNOTATE_PUNCT = True
AUTO_KV_JSON = true
BREAK_ONLY_BEFORE =
BREAK_ONLY_BEFORE_DATE = True
CHARSET = UTF-8
DATETIME_CONFIG = /etc/datetime.xml
DEPTH_LIMIT = 1000
DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
HEADER_MODE =
LB_CHUNK_BREAKER_TRUNCATE = 2000000
LEARN_MODEL = true
LEARN_SOURCETYPE = true
LINE_BREAKER_LOOKBEHIND = 100
MATCH_LIMIT = 100000
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 2
MAX_DIFF_SECS_AGO = 3600
MAX_DIFF_SECS_HENCE = 604800
MAX_EVENTS = 256
MAX_TIMESTAMP_LOOKAHEAD = 128
MUST_BREAK_AFTER =
MUST_NOT_BREAK_AFTER =
MUST_NOT_BREAK_BEFORE =
SEGMENTATION = indexing
SEGMENTATION-all = full
SEGMENTATION-inner = inner
SEGMENTATION-outer = outer
SEGMENTATION-raw = none
SEGMENTATION-standard = standard
SHOULD_LINEMERGE = True
TRANSFORMS =
TRUNCATE = 10000
detect_trailing_nulls = false
maxDist = 100
priority =
sourcetype =
termFrequencyWeightedDist = false

I verified the data and it is not in JSON format, so AUTO_KV_JSON would not apply to it. The only thing I could find in transforms.conf and props.conf which separates fields based on "=" is:

[ad-kv]
CAN_OPTIMIZE = True
CLEAN_KEYS = True
DEFAULT_VALUE =
DEPTH_LIMIT = 1000
DEST_KEY =
FORMAT =
KEEP_EMPTY_VALS = False
LOOKAHEAD = 4096
MATCH_LIMIT = 100000
MV_ADD = true
REGEX = (?<_KEY_1>[\w-]+)=(?<_VAL_1>[^\r\n]*)
SOURCE_KEY = _raw
WRITE_META = False

which is being called by:

[ActiveDirectory]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+---splunk-admon-end-of-event---\r\n[\r\n]*)
EXTRACT-GUID = (?i)(?!=\w)(?:objectguid|guid)\s*=\s*(?<guid_lookup>[\w\-]+)
EXTRACT-SID = objectSid\s*=\s*(?<sid_lookup>\S+)
REPORT-MESSAGE = ad-kv
# some schema AD events may be very long
MAX_EVENTS = 10000
TRUNCATE = 100000
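A hedged sketch of one possible direction (the source pattern and stanza names are assumptions, modelled on the ad-kv example above): a search-time REPORT keyed on the stash source path, so the summary events keep the stash sourcetype and its zero-license cost while still getting key=value extraction.

props.conf:
[source::...stash...]
REPORT-stash_kv = stash-kv

transforms.conf:
[stash-kv]
REGEX = (?<_KEY_1>[\w-]+)=(?<_VAL_1>[^,\r\n]*)
SOURCE_KEY = _raw
MV_ADD = true

This assumes the collected _raw actually contains key=value pairs; whether the source pattern matches the spool file paths would need to be verified.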
I'm trying to forward logs from a Node process that runs on a Raspberry Pi Model 3B. This is the error I get when I try to run the Splunk universal forwarder on my Pi: (screenshot). Running the file command on the splunk executable returns this result: (screenshot). Of course, I downloaded the ARM version of the universal forwarder. Any help will be much appreciated. RPi specs: (screenshot). OS details: (screenshot).
Hi, I have a directory that contains 60 bz2 files, 27 GB in total. After 24 hours, indexing is still not complete. How can I check the indexing status of this directory (how much remains and how much has been processed)? How can I tune Splunk to index compressed files more quickly? FYI: there is no license limit issue, and I have enough disk space. Any ideas? Thanks
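A hedged sketch for checking progress (the directory name filter is a placeholder): compressed files are handled by the archive processor, and its messages in the internal log indicate which archives have been picked up and finished:

index=_internal sourcetype=splunkd component=ArchiveProcessor your_directory_name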