All Topics
I am trying to configure the HTTP Event Collector (HEC) in my lab so that I can do some testing around data queuing, but I'm hitting an odd problem. My setup is a heavy forwarder configured to send to a small cluster of indexers, and I can see in the logs that it is making good connections to all of them. However, when I configured a token to test with, my test events were rejected. From another server I issue the following command:

```
curl -k "http://<myip>:8088/services/collector" -H "Authorization: Splunk dded8e66-57f2-44e9-b4a4-42bf231a2e7e" -d '{"event": "Hello, world!", "sourcetype": "manual"}'
```

I get the following response on the issuing server:

```
curl: (52) Empty reply from server
```

And this is what shows up in splunkd.log on my HEC server:

```
04-05-2021 14:36:05.026 -0400 ERROR TcpInputProc - Message rejected. Received unexpected message of size=1347375956 bytes from src=<myip>:46804 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.
```

I can't imagine my message is really that size. Anyone got an idea what is going on here?
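One possible reading of that error (my interpretation, not confirmed by the post): the reported size=1347375956 is suspicious because it is exactly the first four bytes of an HTTP request interpreted as a big-endian length, which is what happens when an HTTP client talks to a splunktcp (Splunk-to-Splunk) input instead of an HEC endpoint:

```python
# The "size" in the TcpInputProc error is the first four bytes of the
# incoming payload read as a big-endian integer. Decoding those bytes
# shows the port received the start of an HTTP request:
size = 1347375956
decoded = size.to_bytes(4, "big").decode("ascii")
print(decoded)  # prints "POST"
```

So the curl request appears to be landing on a splunktcp listener rather than the HEC input; it is worth double-checking which input is actually bound to port 8088. It can also help to try `https://` in the URL, since an empty reply on the client side is a typical symptom when SSL is enabled on the HEC endpoint but the request is sent as plain HTTP.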
A convenience feature was introduced in Splunk 7 (at least, I noticed it in a Splunk 7 installation and not in 5 or 6) that automatically inserts a new line when a pipe character ("|") is typed at the end of a line. I no longer see this behavior in 8. Has this feature been deprecated, or is there a setting to enable/disable it?
Full disclosure: I'm a Salesforce guy rather than a Splunk guy. I'm working with my internal Splunk team to try to ingest the CronTrigger and CronJobDetail objects from my org so I can monitor for job-hang statuses, or for when developers create jobs with hard end dates. The Splunk team is getting a 400 error on these objects, and their queries look okay to me. I did find a separate article here about setting intervals for date predicates (https://community.splunk.com/t5/All-Apps-and-Add-ons/Salesforce-object-response-status-400/m-p/444002) and I've passed that on to them to investigate. I've also suggested they try ingesting a single field from each object and see if they get anything back.

In the meantime, has anybody here ingested these two objects into Splunk? Most of what I'm finding on Google says a 400 response means a bad query, but can't this error also be thrown if the integration user doesn't have object access? If so, I may be at an impasse, since these are system objects, not standard or custom ones. Salesforce has these locked down so tight that even with my level of access I can't view the basic system properties on them; not that I'd want to mess with access around them anyway.
Hi, my logs are in the following format:

```
{[-] logger: ....... message: .......... severity: Error }
{[-] exception: ......... logger: ....... message: .......... severity: Error }
```

My query is:

```
........| rex "\"exception\":\"(?<ErrorMsg>.*?)\"" | table Application, ErrorMsg
```

The issue: some app logs have only the key "message", while others have both "exception" and "message". How can I change my query so that it first checks whether the key "exception" exists and, if so, extracts its value; and if there is no "exception" key, falls back to the value of "message"? My current query is able to extract the value of "exception" (and if I change "exception" to "message", it extracts "message"), but I'm trying to implement an IF or CASE condition here.
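A common way to express that fallback in SPL (a sketch; it assumes the key names and the Application field from the post) is to extract both keys into their own fields and pick the first non-null one with coalesce:

```
... | rex "\"exception\":\"(?<exception>.*?)\""
    | rex "\"message\":\"(?<message>.*?)\""
    | eval ErrorMsg=coalesce(exception, message)
    | table Application, ErrorMsg
```

coalesce returns its first non-null argument, so the exception text wins whenever it was extracted and the message text is used otherwise.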
Hi, I'm classifying different kinds of address mistakes and showing the counts of these classes in a bar chart: for example "addresses missing a zip", "street addresses without a street name", "street addresses without an apartment number", "addresses with zip 00000", etc. As I understand it, this kind of calculation must be done by defining each rule separately, so a basic 'stats count by' doesn't work. I've created the cases with if-statements, using one variable for each rule, and then put these variables into one stats statement.

What I need next is the ability to drill down on a selected bar and show the addresses belonging to it in a different panel. For example, the bar "addresses with zip 00000" could have 4 cases, and selecting the bar would show those 4 addresses in a separate panel, like:

```
Streetname1 A 5 00000 Helsinki
Streetname2 45 00000 Tampere
Streetname3 3-5a 56 C 00000 Turku
Streetname4 120/3 00000 Rovaniemi
```

Unfortunately, I haven't found any answers on how to do this.
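In Simple XML, one way to wire this up (a sketch; the token name, field names, and queries are placeholders for your actual classification logic) is to have the chart set a token on click and have a second panel depend on that token:

```xml
<panel>
  <chart>
    <search><query>... | stats count by error_class</query></search>
    <drilldown>
      <!-- $click.value$ holds the x-axis category of the clicked bar -->
      <set token="selected_class">$click.value$</set>
    </drilldown>
  </chart>
</panel>
<panel depends="$selected_class$">
  <table>
    <search>
      <query>... | eval error_class=case(...) | search error_class="$selected_class$" | table address</query>
    </search>
  </table>
</panel>
```

The second panel stays hidden until a bar is clicked; it re-applies the same case() classification to the raw events and filters to the selected class.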
Hello fellow Splunkers, is it possible for Splunk to connect to the IBM XFE app to get threat intelligence feeds? I would like to know if someone else has been involved in this process, since there is not much information about it out there. Thanks so much!
Everyone on our project recently got new CAC cards, and the new cards have a longer ID number than before. As a result, all of our account IDs changed, which left almost all of our searches orphaned. I found that I could go to All Configurations and use Reassign Knowledge Objects to fix them, but we are now left with a recurring system alert for 5 orphaned searches that are still tied to my old ID number. If I filter the Reassign Knowledge Objects page for orphaned searches, these 5 do not show up. When I look at the actual searches, they have all been successfully reassigned and show my new ID number, but the alert persists.

How can I clear this persistent false-positive orphaned-search alert?
We are trying to pull LastPass logs into Splunk, but we are facing an issue: nothing is getting indexed for the Reporting logs, whereas the shared path and user inputs are working fine. After checking the log file, I found the error below:

```
2021-04-05 10:02:37,299 CRITICAL pid=14060 tid=MainThread file=base_modinput.py:log_critical:316 | Lastpass identity collection. Error in forwarding data: Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-lastpass\bin\input_module_lastpass_event_reporting.py", line 371, in collect_events
    event_time = event_time.strftime('%s.%f')
ValueError: Invalid format string
2021-04-05 10:02:37,301 ERROR pid=14060 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events. Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\TA-lastpass\bin\ta_lastpass\aob_py3\modinput_wrapper\base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "C:\Program Files\Splunk\etc\apps\TA-lastpass\bin\lastpass_event_reporting.py", line 68, in collect_events
    input_module.collect_events(self, ew)
  File "C:\Program Files\Splunk\etc\apps\TA-lastpass\bin\input_module_lastpass_event_reporting.py", line 391, in collect_events
    raise e
  File "C:\Program Files\Splunk\etc\apps\TA-lastpass\bin\input_module_lastpass_event_reporting.py", line 371, in collect_events
    event_time = event_time.strftime('%s.%f')
ValueError: Invalid format string
```

Add-on: https://splunkbase.splunk.com/app/2633/#/details

Could someone please help me get this issue resolved?
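A possible root cause (my diagnosis from the traceback, not from the add-on docs): the TA calls `event_time.strftime('%s.%f')`, and `%s` (epoch seconds) is a glibc extension that the Windows C runtime does not support, which matches the ValueError on this Windows installation. A portable equivalent, sketched with a made-up event time:

```python
from datetime import datetime, timezone

# Hypothetical event time; the TA would build this from the LastPass API.
event_time = datetime(2021, 4, 5, 10, 2, 37, 299000, tzinfo=timezone.utc)

# event_time.strftime('%s.%f') raises ValueError on Windows because '%s'
# is not a standard strftime directive. datetime.timestamp() is portable:
epoch = "%.6f" % event_time.timestamp()
print(epoch)  # prints "1617616957.299000"
```

Until the add-on is fixed, running this input from a Linux heavy forwarder would sidestep the Windows-only strftime limitation.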
Currently my search will display events with a "Rejected" file status, but if a rejected file gets fixed and is then "Delivered", I want to remove the earlier "Rejected" event from the report. How would Splunk be able to accomplish this?

In other words: I only want Splunk to display results when the field Failures_Reason_RegEx matches "Rejected", but I want to remove all "Rejected" statuses that have been fixed, resent, and are now in "Delivered" status. I do not want to display any "Delivered" status on the report.

I am using Consumed_FileName_REGEX, SenderRoutingID_RegEX, and ReceiverRoutingId_RegEX in combination as unique identifiers to match each event. Below is the base search, which shows all "Rejected" and "Delivered" messages; I only want it to show the Rejected ones, minus any that were Delivered at a later time.

```
index=CUSTOM-INDEX AND sourcetype="events" AND (host="server1" OR host="server2") AND ("Messaging.Message.MessageRejected" OR "Messaging.Message.PayloadDelivered")
| rex field=_raw "\sSenderRoutingId\((?P<SenderRoutingID_RegEX>.*)\)\sReceiverRoutingId"
| rex field=_raw "\sReceiverRoutingId\((?P<ReceiverRoutingId_RegEX>.*)\)\sDirection\("
| rex field=_raw "\sMessageState\((?P<File_Status_REGEX>.*)\)\sFinalState\("
| rex field=_raw "\sConsumptionFilename\((?P<Consumed_FileName_REGEX>.*)\)\sProductionFilename\("
| rex field=_raw "PeerAddress\((?P<Delivery_URL_RegEx>.*)\)\sConsumptionFilename\("
| rex field=_raw "Exchange\((?P<Exchange_Name_RegEx>.*)\)\sTransport\("
| rex field=_raw "RejectedReason\((?P<Failures_Reason_RegEx>.*)\)\sCycleId\("
| top limit=50000 Exchange_Name_RegEx, SenderRoutingID_RegEX, File_Status_REGEX, Consumed_FileName_REGEX, ReceiverRoutingId_RegEX, Delivery_URL_RegEx
```
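One way to express "rejected and never subsequently delivered" (a sketch built on the extracted field names above; verify that File_Status_REGEX actually carries the Rejected/Delivered values in your events) is to keep only the latest status per unique file and filter on it:

```
... | stats latest(_time) as last_seen latest(File_Status_REGEX) as last_status
        by Consumed_FileName_REGEX SenderRoutingID_RegEX ReceiverRoutingId_RegEX
    | where last_status="Rejected"
```

latest() picks the value from the most recent event in each group, so a file whose last event is "Delivered" drops out of the report, while a file that was rejected and never redelivered remains.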
Hi, I'm pulling my hair out trying to get IPFIX/AppFlow data flowing into Splunk from NetScalers. I can see the data coming in, but it looks like it's failing to get decoded; I get errors such as this in streamfwd.log:

```
2021-04-05 12:52:01 WARN [3528] (NetflowDecoder.cpp:1275) stream.NetflowReceiver - NetFlowDecoder::decodeFlow Unable to decode flow set data. No template with id 264 received for observation domain id 0 from device 172.31.113.8 . Dropping flow data set of size 1372
```

Looking in splunk_app_stream.log, I see these errors after adding the Citrix NetFlow definitions:

```
2021-04-05 12:41:48,271 ERROR stream:569 - Invalid Stream definition for stream with id netflow -- Validation Error None is not of type 'string'
```

So it seems to me there is some kind of problem between the definitions in the netflow file and possibly the citrix.xml vocabulary file, but I can't figure out what. If I could change the code so that, when the error is thrown, it tells me which element is causing the problem, that would be very useful; as it stands I'm in the dark. Has anyone got this working, or does anyone know how to troubleshoot it? The installed apps are:

- splunk-add-on-for-stream-wire-data_730
- splunk-add-on-for-stream-forwarders_730
- splunk-app-for-stream_730
- splunk-add-on-for-citrix-netscaler_800
I have something that runs every day, but I need to see it only for the previous end of month (EOM) that is also a weekday. I have a field in the logs such as date=2021-03-31. I am not sure how to compute the previous EOM (weekday) and compare it with my date field value. I have already tried the answer given here, but it has not helped: https://community.splunk.com/t5/Archive/Function-To-Return-Last-Weekday/m-p/172413#M25201 Any help appreciated. Thanks in advance.
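One way to compute this in SPL (a sketch; it assumes the log field is literally named date and formatted %Y-%m-%d): snap to the start of the current month, step back one day to get the last day of the previous month, then back up to Friday if that day falls on a weekend:

```
| eval eom=relative_time(now(), "@mon-1d")
| eval dow=strftime(eom, "%w")
| eval eom=case(dow="6", relative_time(eom, "-1d"),
                dow="0", relative_time(eom, "-2d"),
                true(), eom)
| eval eom_date=strftime(eom, "%Y-%m-%d")
| where date=eom_date
```

Here "%w" yields 0 for Sunday and 6 for Saturday, so a Saturday EOM moves back one day and a Sunday EOM moves back two, both landing on Friday.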
Hi, I am trying to implement some alerts relying on the "vendor_region" field in the "All_Changes" CIM dataset. The data model and relevant datasets are populated by AWS CloudTrail logs pulled from an S3 bucket by the AWS add-on. While troubleshooting the search, I noticed that although the "vendor_region" field is listed in the documentation for the CIM All_Changes dataset (I am using version 4.17), the field is not present in the data model in the actual CIM add-on. Am I missing something? Thanks
Dear experts, I am trying to add data to monitor Cisco logs through Splunk. I am able to add only one device; it gives an error when I add more devices. A snapshot of the error was attached (not reproduced here). Any help regarding this will be appreciated.
Hi, we're starting on a Splunk ServiceNow integration; the customer is using the Quebec version. Is the Splunk Add-on for ServiceNow forward compatible? The documentation says that Paris is supported, but the current ServiceNow version is Quebec. Is Quebec already supported in the latest add-on version, and if not, when is this planned? What about the Rome release later this year? https://splunkbase.splunk.com/app/1928/ Thanks in advance!
I'm trying to add a host through the on-prem Linux Enterprise Console server; the machine I want to add is Windows. In the GUI I entered a Credential Name, which I created through the Credential Manager, and added the private key for the host I want to add. When I try to add the host through the GUI or the command line, it fails as in the screenshots below:

```
Enterprise Console host expansion failed: A failure occurred: Validate SFTP configuration
Error message: Task failed: Verify connection on host: controller as user: Abdulrahman.kazamel with a message: Failed to connect to the remote host. Please verify that the hostname and credentials you provided are correct.
```
Hi, I have a table (panel 1) with the columns Col_A and Col_B. Based on the values of Col_B, I have to create a conditional drilldown (for panel 2). But my requirement is that Col_B should be hidden in panel 1; that is, I want to show only Col_A in panel 1. How do I do that? Please assist.
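In Simple XML this is commonly done by listing only the visible columns in `<fields>` while still returning Col_B from the search; the hidden field remains available to the drilldown as $row.Col_B$. A sketch (the query and token name are placeholders):

```xml
<panel>
  <table>
    <search><query>... | table Col_A, Col_B</query></search>
    <!-- Only Col_A is displayed; Col_B stays in the result set. -->
    <fields>Col_A</fields>
    <drilldown>
      <!-- $row.Col_B$ reads the hidden column for the clicked row. -->
      <set token="colb_token">$row.Col_B$</set>
    </drilldown>
  </table>
</panel>
```

Panel 2 can then depend on $colb_token$ and branch on its value to implement the conditional behavior.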
Hi, my Lookup Editor is version 2.0.3 and Splunk is 7.3.3. When I try to open my lookup file from within an existing app, I get the error "The requested lookup file does not exist." I upgraded the Lookup Editor to 2.6.0 and am still getting this error, yet I am able to open the lookup when going through the Lookup Editor app itself. Any suggestions on how to fix this? Thanks!
Hi, I have one text input field in my dashboard. It is a mandatory field, so I want to make sure there is some value in it before the form is submitted. But it accepts a blank value too: if we click on the text field and don't enter anything, that is still treated as a value and submitted. I tried searchWhenChanged, which did not work, and prefix/suffix, which also did not work:

```xml
<input type="text" token="abc" searchWhenChanged="true">
  <label>abc</label>
  <change>
    <condition match="$value$!=&quot;*&quot;"></condition>
  </change>
  <prefix>NOT (</prefix>
  <suffix>)</suffix>
</input>
```

I want it to require some value; it should not accept blanks, or input that is only whitespace.
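Simple XML has no built-in required-field validation, but a common workaround (a sketch; the abc_ok token name is a placeholder) is to set a second token only when the input contains a non-whitespace character, and make the panels depend on that token:

```xml
<input type="text" token="abc" searchWhenChanged="true">
  <label>abc</label>
  <change>
    <!-- match() is an eval regex test: require one non-whitespace char -->
    <condition match="match(&quot;$value$&quot;, &quot;\S&quot;)">
      <set token="abc_ok">$value$</set>
    </condition>
    <condition>
      <unset token="abc_ok"></unset>
    </condition>
  </change>
</input>
<!-- Panels render only when abc_ok is set -->
<panel depends="$abc_ok$">
  ...
</panel>
```

With this pattern, blank or whitespace-only input unsets the token and the dependent panels simply never run, which in practice blocks the "submission".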
How can I sort so that Stage_INT comes first and the others after it? The desired output was shown in an attached image (not reproduced here). Can someone please give me the query? @mayurr98
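Without the attached image the field names are guesses, but the usual pattern for pinning one value to the top (a sketch; stage is a placeholder field name) is an explicit sort key:

```
... | eval sort_key=if(stage="Stage_INT", 0, 1)
    | sort sort_key, stage
    | fields - sort_key
```

Rows with Stage_INT get key 0 and sort ahead of everything else; the temporary key is dropped before display.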
Hello, I need to enable alert suppression during maintenance windows in Splunk ITSI. I have correlation searches that refer directly to the actual data index rather than itsi_summary. How can I enable maintenance-window suppression in that scenario? Thanks, Vijay