All Posts

Hi @sankardevarajan

The configuration you provided is for the OnBase application to send logs to Splunk, not for Splunk configuration itself. The snippet is for the Hyland.Logging component, which is part of the OnBase application. You need to modify the .config file (likely Application-Server-Web.config or another relevant config file) on the OnBase Application Server to include the specified route:

  <Route name="Logging_Local_Splunk" >
    <add key="Splunk" value="http://your-splunk-heavy-forwarder-or-indexer:8088"/>
    <add key="SplunkToken" value="your-splunk-http-event-collector-token"/>
    <add key="DisableIPAddressMasking" value="false" />
  </Route>

To receive these logs in Splunk Cloud, you need to:
1. Set up an HTTP Event Collector (HEC) token in your Splunk Cloud instance.
2. Configure the OnBase application to send logs to the HEC endpoint.

In Splunk Cloud, you will need to create an HEC token and get the HEC endpoint URL. You can then use this token and endpoint URL in the OnBase application's .config file. The http://localhost:SplunkPort in the configuration should be replaced with the URL of your Splunk HEC endpoint (typically https://http-inputs-<stackName>.splunkcloud.com), and SplunkTokenNumber should be replaced with the actual HEC token.

For more information on configuring HEC in Splunk Cloud, refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector.

For reference, the current instructions for creating HEC tokens in Splunk Cloud are:
1. Click Settings > Add Data.
2. Click Monitor.
3. Click HTTP Event Collector.
4. In the Name field, enter a name for the token.
5. (Optional) In the Source name override field, enter a name for a source to be assigned to events that this endpoint generates.
6. (Optional) In the Description field, enter a description for the input.
7. (Optional) If you want to enable indexer acknowledgment for this token, click the Enable indexer acknowledgment checkbox.
8. Click Next.
9. (Optional) Make edits to source type and confirm the index where you want HEC events to be stored. See Modify input settings.
10. Click Review. Confirm that all settings for the endpoint are what you want. If all settings are what you want, click Submit. Otherwise, click < to make changes.
11. (Optional) Copy the token value that Splunk Web displays and paste it into another document for reference later.
12. (Optional) Click Track deployment progress to see progress on how the token has been deployed to the rest of the Splunk Cloud Platform deployment. When you see a status of "Done", you can then use the token to send data to HEC.

Ensure that the Splunk HEC endpoint is accessible from the OnBase Application Server. If it's not, you may need to set up a Heavy Forwarder to act as an intermediary.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
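As a purely illustrative addendum: with the Splunk Cloud values substituted in, the completed route might look roughly like the sketch below. The stack name, port, and token are placeholders I made up, and the exact key names must match whatever your OnBase/Hyland.Logging documentation specifies:

  <!-- Hypothetical values: replace the stack name and token with your own -->
  <Route name="Logging_Local_Splunk" >
    <add key="Splunk" value="https://http-inputs-mystack.splunkcloud.com:443"/>
    <add key="SplunkToken" value="00000000-0000-0000-0000-000000000000"/>
    <add key="DisableIPAddressMasking" value="false" />
  </Route>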
I want to onboard application logs into Splunk Cloud. Hyland.Logging can be configured to send information to Splunk as well as the Diagnostics Console by modifying the .config file of the server. To configure Hyland.Logging to send information to Splunk:

  <Route name="Logging_Local_Splunk" >
    <add key="Splunk" value="http://localhost:SplunkPort"/>
    <add key="SplunkToken" value="SplunkTokenNumber"/>
    <add key="DisableIPAddressMasking" value="false" />
  </Route>

Configuring Hyland.Logging for Splunk • Application Server • Reader • Product Documentation

I am not understanding where the above configuration needs to be set in Splunk. I would much appreciate it if anyone could guide me.
Yes. If you don't have "holes" in your firewall to send data directly from the other components to Qradar, it won't work. You might try to use RULESET in props.conf on indexers instead of TRANSFORMS.
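To illustrate that suggestion with a minimal sketch (the sourcetype, class, group name, and Qradar host below are all hypothetical, so adjust them to your environment and verify against the props.conf/transforms.conf/outputs.conf specs):

  # props.conf on the indexers (hypothetical sourcetype)
  [your_sourcetype]
  RULESET-route_to_qradar = send_to_qradar

  # transforms.conf
  [send_to_qradar]
  REGEX = .
  DEST_KEY = _SYSLOG_ROUTING
  FORMAT = qradar_syslog

  # outputs.conf (hypothetical Qradar host)
  [syslog:qradar_syslog]
  server = qradar.example.com:514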
@PickleRick SH, CM & LM don't have connectivity to the remote Qradar; only the Indexer is configured to send the syslogs to the remote Qradar. So there is no point configuring syslog output on the SH, CM and LM, right?
You got a lot of hints already. What have you compiled from them?
Thank you so much! How did you all figure it out? Life saver!
The new CSS uses a flex attribute, which breaks the old definitions. I had used the syntax below (without the flex:unset), but since the breaking change, the flex:unset fixes the problem.

  #header_row .dashboard-cell {
    flex: unset;
  }
  #header_row .dashboard-cell:nth-child(1) {
    width: 52% !important;
  }
  #header_row .dashboard-cell:nth-child(2) {
    width: 24% !important;
  }
  #header_row .dashboard-cell:nth-child(3) {
    width: 24% !important;
  }
I'm assuming you have the following sort of CSS

  #header_row .dashboard-cell:nth-child(1) {
    width: 52% !important;
  }
  #header_row .dashboard-cell:nth-child(2) {
    width: 24% !important;
  }
  #header_row .dashboard-cell:nth-child(3) {
    width: 24% !important;
  }

which has stopped working with Splunk 9.4. You need to add the following for each of your row definitions:

  #header_row .dashboard-cell {
    flex: unset;
  }

It's the flex attribute that is present in 9.4 variants that breaks things, so this fixes it.
Any working solution for 9.4.x? None of the suggestions seem to be working so far.
Hi, I would like to resize the panels that I have in a Splunk row. I have 3 panels and I referred to some previous posts on doing the panel width resize using CSS. I remember this used to work? But I can't seem to get this working on my current Splunk dashboard. Due to some script dependencies, I am not able to use Dashboard Studio, hence I am still stuck with the classic XML dashboard. I referred to a previous question on this and did exactly what was mentioned, but the panels still appear equally spaced at 33.33% each.

  <form version="1">
    <label>Adjust Width of Panels in Dashboard</label>
    <fieldset submitButton="false">
      <input type="time" token="tokTime" searchWhenChanged="true">
        <label>Select Time</label>
        <default>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </default>
      </input>
    </fieldset>
    <row>
      <panel depends="$alwaysHideCSS$" id="CSSPanel">
        <html>
          <p/>
          <style>
            #CSSPanel{ width:0% !important; }
            #errorSinglePanel{ width:25% !important; }
            #errorStatsPanel{ width:30% !important; }
            #errorLineChartPanel{ width:45% !important; }
          </style>
        </html>
      </panel>
      <panel id="errorSinglePanel">
        <title>Splunkd Errors (Single Value)</title>
        <single>
          <search>
            <query>index=_internal sourcetype=splunkd log_level!=INFO | timechart count</query>
            <earliest>$tokTime.earliest$</earliest>
            <latest>$tokTime.latest$</latest>
            <sampleRatio>1</sampleRatio>
          </search>
          <option name="colorBy">trend</option>
          <option name="colorMode">block</option>
          <option name="drilldown">none</option>
          <option name="numberPrecision">0</option>
          <option name="showSparkline">1</option>
          <option name="showTrendIndicator">1</option>
          <option name="trellis.enabled">0</option>
          <option name="trellis.scales.shared">1</option>
          <option name="trellis.size">medium</option>
          <option name="trendColorInterpretation">inverse</option>
          <option name="trendDisplayMode">absolute</option>
          <option name="unitPosition">after</option>
          <option name="useColors">1</option>
          <option name="useThousandSeparators">1</option>
        </single>
      </panel>
      <panel id="errorStatsPanel">
        <title>Top 5 Error (Stats)</title>
        <table>
          <search>
            <query>index=_internal sourcetype=splunkd log_level!=INFO | top 5 component showperc=false</query>
            <earliest>$tokTime.earliest$</earliest>
            <latest>$tokTime.latest$</latest>
            <sampleRatio>1</sampleRatio>
          </search>
          <option name="drilldown">none</option>
          <option name="refresh.display">progressbar</option>
        </table>
      </panel>
      <panel id="errorLineChartPanel">
        <title>Splunkd Errors (Timechart)</title>
        <chart>
          <search>
            <query>index=_internal sourcetype=splunkd log_level!=INFO | timechart count</query>
            <earliest>$tokTime.earliest$</earliest>
            <latest>$tokTime.latest$</latest>
            <sampleRatio>1</sampleRatio>
          </search>
          <option name="charting.chart">line</option>
          <option name="charting.drilldown">none</option>
          <option name="trellis.enabled">0</option>
          <option name="trellis.scales.shared">1</option>
          <option name="trellis.size">medium</option>
        </chart>
      </panel>
    </row>
  </form>
Hello @dshpritz, it looks like this is "officially" documented at https://splunk.my.site.com/customer/s/article/How-To-Use-Wildcards-with-Sourcetype
Could you help guide me through setting up this whole thing?
I believe (although I rarely use the event visualisation) that you must specify a | fields a b c... in your SPL to get fields from the event to show up in the event panel as fields. The XML <fields> element is used as a way to limit the display of the available fields from the search, so in order to get those fields there in the first place, you must use the SPL fields command to specify the fields you want. Using the table command is not the right way.
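As a minimal sketch of that point (the index, sourcetype and field names here are just examples, not taken from the original question), the search behind the event panel would explicitly list the fields it should expose:

  index=_internal sourcetype=splunkd log_level!=INFO
  | fields _time host component log_level

The Simple XML <fields> element on the panel can then narrow that list further, but only fields kept by the SPL fields command are available to it in the first place.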
The Splunk fix is known as SPL-270280. A fix has been included in the latest version 9.4.2 and backported to supported older releases 9.3.4, 9.2.6 and 9.1.9: https://splunk.my.site.com/customer/s/article/Splunk-vulnerability-libcurl-7-32-0-8-9-1-DoS-CVE-2024-7264-TEN-205024
_raw is like ... \"products\": [\"foo\", \"bar\"], ...
It's not that httpout is not supported for Logstash, it's that Logstash cannot do s2s. Yes, it is confusing, but despite sharing some of the low-level mechanics, s2s over HTTP (which is what httpout is) has nothing to do with "normal" HEC.
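To make the distinction concrete, an httpout output group on a forwarder is configured roughly like the sketch below (hostname and token are placeholders, and the setting names are as I recall them from the outputs.conf spec, so double-check against your version). The receiving end still has to speak Splunk's s2s-over-HTTP protocol, which Logstash does not:

  # outputs.conf on the forwarder (hypothetical values)
  [httpout]
  httpEventCollectorToken = 00000000-0000-0000-0000-000000000000
  uri = https://splunk-receiver.example.com:8088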
You can make events generated by local inputs be sent to just one output group, but that will not be pretty. You need to set the _TCP_ROUTING key for each input stanza that you want to selectively manage. That means adding it to every single one of Splunk's own inputs. I'd just create a separate app with an inputs.conf containing just this one setting for each input stanza. EDIT: And one more thing - you cannot use both tcpout and httpout at the same time.
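A minimal sketch of that idea follows; the group names and the application-log path are made up, and the real stanza names have to match the inputs you actually want to re-route:

  # inputs.conf in the override app
  [monitor://$SPLUNK_HOME/var/log/splunk]
  _TCP_ROUTING = internal_group

  [monitor:///var/log/app/app.log]
  _TCP_ROUTING = app_group

  # outputs.conf (hypothetical destinations)
  [tcpout:internal_group]
  server = idx-internal.example.com:9997

  [tcpout:app_group]
  server = idx-app.example.com:9997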
So I tried this but ended up with the same problem: UF --> HF (routing) --> LS (writing to a file). httpout is definitely not working/supported for Logstash.
Exactly - stopping internal logs at the UF level does not work, however at the Logstash level it worked. But yeah, via HEC it seems not to be possible so far. Still waiting for others to respond; maybe we crack something amazing here collectively. Thank you for the response though.
Thank you for your response. I have tried the below, but that also gives the same problem.

  codec => plain { charset => "UTF-8" }
  codec => plain { charset => "UTF-16LE" }