All Posts

Hello yuanliu, fillnull only works if there is at least one row in addition to the field names. If table2.csv contains only the field names, fillnull does nothing. I tried many variations with fillnull, and with eval as well, to add a row and append the new field name, but in the end I didn't manage to delete my added row again to get back the original state of the lookup with only the new field name added.
Thank you for your suggestion. I tried this, but I get a new row in table3.csv under the header field1,field2,comment3:

"","",""

This is not acceptable for the later process; empty rows are ok. I changed your SPL to

| inputlookup table3.csv
| appendpipe [ stats count | eval field1=NULL, field2=NULL, comment3=NULL ]
| fields field1, field2, comment3
| outputlookup table3.csv

but this creates an empty lookup table3.csv again.
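A variant worth trying, sketched here with a guard that only injects the placeholder row when the lookup is empty (the where count=0 guard is an assumption of mine, not something suggested in this thread):

| inputlookup table3.csv
| appendpipe
    [ stats count
      | where count=0
      | eval field1="", field2="", comment3=""
      | fields - count ]
| fields field1 field2 comment3
| outputlookup table3.csv

With rows present, the subsearch emits nothing and the lookup is rewritten unchanged; with a header-only lookup, it still writes one blank row, so the downstream process would need to tolerate or filter that single row.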
Hi @PickleRick, I opened this question on Community as well, to share the problem and (I hope) the solution in Community and not only in Slack. Anyway, I am doing some tuning activity on the Indexers and I had some good results. We configured:

On indexers:
parallelIngestionPipelines: from 4 to 2

On Search Heads, for each Data Model:
backfill range: 1 day
max summarization search time: 1200 seconds
Maximum Concurrent Summarization Searches: 4
Poll Buckets For Data To Summarize: unflagged
Automatic Rebuilds: unflagged

All these actions reduced the acceleration run times from 3600 to 700-800 seconds for Authentication. Now I'd like to try to get better results: do you think that reducing parallelIngestionPipelines from 2 to 1 could reduce the acceleration run_time value without creating indexing issues (at the moment we have queues=0 on all Indexers and all queues)? Are there other settings that I could try (remembering that this is a production system)? Ciao and thanks, Giuseppe
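For reference, a sketch of where those settings live on disk; the attribute names are my mapping of the UI labels onto server.conf and datamodels.conf, so verify them against the spec files for your version before changing anything in production:

# server.conf on each indexer (restart required)
[general]
parallelIngestionPipelines = 2

# datamodels.conf on the search heads, one stanza per accelerated model
[Authentication]
acceleration.backfill_time = -1d
acceleration.max_time = 1200
acceleration.max_concurrent = 4
acceleration.manual_rebuilds = true
acceleration.poll_buckets_until_maxtime = false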
Hi @CyberSamurai, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @Cole-Potter, you're probably speaking of a Heavy Forwarder (as in the title) and not a Deployment Server, also because a DS that has to manage more than 50 clients must be installed on a dedicated server. Anyway, it isn't a good idea to index your logs locally, because in that case you have to pay for both Splunk Cloud and Splunk Enterprise on the HF. The best solution is to forward logs through the HF to Splunk Cloud, and on Splunk Cloud create an alert that monitors your data flows and sends an email if one stops. On the HF you have to enable TCP inputs to receive logs from the Universal Forwarders (managed by the DS) and syslog inputs. A few little hints, if your resources permit: use two HFs to avoid single points of failure in your architecture; use a load balancer to distribute syslog data flows between the two HFs and gain HA; and use rsyslog on the HFs to receive syslog (with a file monitor input) rather than Splunk network inputs. The last one doesn't depend on resources, so apply it anyway. Ciao. Giuseppe
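A minimal sketch of the two HF inputs described above; the port number and the rsyslog output path are assumptions, not details from this thread:

# inputs.conf on each HF: receive from the Universal Forwarders
[splunktcp://9997]
disabled = 0

# inputs.conf on each HF: read the files that rsyslog writes
[monitor:///var/log/remote-syslog]
sourcetype = syslog
disabled = 0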
Hi, firstly, thank you for the work on this addon, and thanks to the community that solves problems by helping each other. We have a Splunk Cloud instance that we want to connect with Jira using this addon. The idea is to send to Jira all the tickets that Splunk creates and manage them in Jira. When a ticket is closed in Jira, we want to bring all of its information, comments, and updates back so we can visualize them in Splunk. Any ideas or URLs that would help us configure this? Maybe with a webhook? Thank you so much, kindest regards. P.S.: Sorry about my English, it is not the best.
@mfleitma You can try to use appendpipe to maintain the header:

| inputlookup table3.csv
| appendpipe [ stats count | eval field1="", field2="", comment3="" ]
| fields field1, field2, comment3
| outputlookup table3.csv

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
Hello, Livehibrid. Thank you for your help; that picture was what I wanted to see. Next question: does the Search Head Cluster Deployer use any ports? Which ports do the SHC Deployer and the Search Heads use to communicate with each other? Thanks.
I have a need to share high-level metrics (via tstats) from a couple of indexes that a few of my teammates do not have access to. I have a scheduled report, let's call it ScheduledReportA, that runs that tstats command once a day in the morning. I was planning to use the loadjob command to load the results of that report into a dashboard that my teammates can then filter and search to get the information they need, but I've noticed that the loadjob command only works some of the time for me, and otherwise returns 0 results. I know it is not my search syntax, as I have used the same search and sometimes gotten results, sometimes not. Syntax for reference:

| loadjob savedsearch="kaeleyt:my_app_name:ScheduledReportA"

Some additional information to help rule things out:
The loadjob search is being run in the same app that ScheduledReportA lives in
The report always has thousands of results, and yes, I've checked this
ScheduledReportA is shared with the app and its users
dispatch.ttl is set to 2p (which I have always understood to be twice the schedule, which in this case is 24h, so a 48h ttl)

I don't suspect a permissions issue or a job-expiration issue based on the above, but I'm wondering if I'm missing something, or if anyone has run into similar issues.
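Two documented loadjob options that are worth testing against this symptom (whether they explain it here is only a guess): ignore_running controls whether artifacts of a still-running dispatch are skipped, and artifact_offset=1 loads the previous artifact instead of the newest, which can confirm whether the most recent run is the problem:

| loadjob savedsearch="kaeleyt:my_app_name:ScheduledReportA" ignore_running=true artifact_offset=1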
I believe I have things set up similarly, but after I get the CAC PIN pop-up, it just goes to the standard login page. I can log in with my LDAP setup and can log in with PIV@mil at the login prompt, but shouldn't this bypass the login once the CAC/PIN is successful? What am I missing?
Make sure to apply fillnull before table:

| inputlookup table3.csv
| fillnull value=- field1 field2 comment3
| table field1 field2 comment3
| outputlookup table3.csv
Hi @Cole-Potter
A Splunk Deployment Server should not be used as an indexer or heavy forwarder; its primary role is to manage app deployment to Universal Forwarders. To receive logs, search, and alert on them before forwarding to Splunk Cloud, you should use a dedicated heavy forwarder or indexer, not the deployment server. You will also need a license to ingest this data. Do you have an on-premises license in addition to your Splunk Cloud entitlement? You might also find the following useful regarding indexing and forwarding data: https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/forwarding-and-receiving-data/9.0/perform-advanced-configuration/route-and-filter-data
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
My end goal: I would like to leverage our Windows Splunk deployment server / Splunk Enterprise server to receive logs from universal forwarders, alert off events from that Splunk instance, and then forward the logs to Splunk Cloud.

Our current architecture includes Splunk Cloud, which receives events from an Ubuntu forwarder, which in turn receives logs from syslog and from other universal forwarders installed on Windows machines across the network.

Deployment server: I believe this also forwards logs to Splunk Cloud. There were some apps that required installation on a Splunk Enterprise instance, and we are receiving that data in Cloud with the deployment server name in the host field. So I think some of those events are forwarded from the deployment server, and I don't think they flow through the Ubuntu server.

I am not exactly sure where to start on trying to figure this out. I have leveraged Splunk documentation for building source inputs and really thrived off of that, but I have been hammering at this, making changes to outputs.conf, with no success. It does not appear that any events are being indexed on the Splunk Enterprise / deployment server instance. Thank you for your help in advance.
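For comparison, a sketch of an outputs.conf shape that indexes locally while also forwarding; the group name and server are placeholders, and Splunk Cloud normally supplies a preconfigured forwarding app with the real server list and certificates:

# outputs.conf on the enterprise/DS instance
[tcpout]
defaultGroup = splunkcloud
# keep a local indexed copy of everything that is forwarded
# (only honored on a full Splunk Enterprise instance, not a UF)
indexAndForward = true

[tcpout:splunkcloud]
server = inputs.<your-stack>.splunkcloud.com:9997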
Hi @Splunked_Kid
You could try raising a request with support to have this installed. I've worked with multiple cloud customers who have been able to get this installed via a support case. Other cloud-supported apps (such as Microsoft 365 App for Splunk) list the sankey app as a dependency, so hopefully you can get this installed via support.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Ladies and gentlemen, we got it working. Some quick lessons learned:
It turns out you do not need the Extended Key Usage for smartCardLogon when you submit your CSR to the DoD NPE portal. A simple TLS Server request with the defaults will work.
httpport must be 8443.
certBasedAuthMethod = Microsoft Universal Principal Name (NOT PIV, which also removes the need for the certBasedUserAuthPivOidList attribute)
With that, all we did was follow this tutorial: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Configuring_Splunk_for_Common_Access_Card_(CAC)_authentication
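Condensed as a config sketch; the attribute names are copied verbatim from this post, so double-check them against the web.conf spec for your Splunk version:

# web.conf on the search head
[settings]
httpport = 8443
certBasedAuthMethod = Microsoft Universal Principal Name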
Thank you for clarifying how it works! Sending "time" along with the "event" fixed the timestamp issue, and setting Indexed Extractions to none fixed the duplicated fields, as all fields are essentially parsed in the application that feeds the data to the /event endpoint. Thank you!
Sending to the /event endpoint skips the props settings.  Splunk expects the metadata to be included in the HEC packet.  See https://docs.splunk.com/Documentation/Splunk/9.4.2/Data/FormateventsforHTTPEventCollector#Event_metadata for the supported metadata fields. Consider adding auto_extract_timestamp=true to the HEC URL to tell Splunk to do timestamp parsing.  See https://splunk.my.site.com/customer/s/article/Timestamp-Not-Extracted-from-JSON-Payload-When-Using-HEC-event-Endpoint-Without-auto-extract-timestamp-true
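A minimal sketch of both options; the host, token, and sourcetype are placeholders:

# supply the timestamp yourself (epoch seconds) in the /event payload
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"time": 1752076220, "sourcetype": "myapp:json", "event": {"msg": "hello"}}'

# or ask Splunk to extract the timestamp from the event body
curl -k "https://splunk.example.com:8088/services/collector/event?auto_extract_timestamp=true" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"sourcetype": "myapp:json", "event": {"timestamp": "2025-07-09T15:50:20+00:00", "msg": "hello"}}'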
Thank you for the suggestion! I tried "%Y-%m-%dT%H:%M:%S%:z" with the same results (it seems like timestamp extraction is ignored). I also validated my time format in PHP and in Python: strptime("2025-07-09T15:50:20+00:00", "%Y-%m-%dT%H:%M:%S%z") seems to work. Yes, I do send to /event. When I tried sending to /raw, the raw HTTP request data was treated as the event "data". It doesn't seem to be related to parsing.
Which HEC endpoint are you sending to?  The behavior is different depending on the endpoint.  The /event endpoint will ignore props settings, but the /raw endpoint honors them. The TIME_FORMAT value doesn't match the data.  Try using %Y-%m-%dT%H:%M:%S%:z
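If the /raw route is ever revisited, a props.conf sketch for that format; the sourcetype name and the TIME_PREFIX regex are assumptions about the payload, not details from this thread:

[myapp:json]
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 32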
Thanks, let me give that a go in the overall solution, but it looks very promising.