Hello everyone, I'm turning to you because I have a small problem. I have an MFT server that writes logs under a directory. In this directory, the log files are stored in subdirectories named after the day, and the log files themselves have names like 1000005847456.log. For example, today's logs (23 April 2024) are stored in the 2024-04-23/ directory.

For now, I have this inputs.conf:

[monitor:///data/logs/.../100000*.log]
disabled = false
sourcetype = log4j
host = PC
followTail = 0
index = test_wild

When I launch the Universal Forwarder, it starts listing all files in /data/logs/.../, and it also starts sending the data from log directories as old as four days. I am not trying to retrieve the old log data, only today's. I don't understand this behavior of the Universal Forwarder. Could someone help me?
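If the goal is simply to skip files that have not been written to recently, a minimal inputs.conf sketch using the standard ignoreOlderThan monitor setting might look like this (the 1d window is an assumption about how quickly old day-directories go quiet):

[monitor:///data/logs/.../100000*.log]
disabled = false
sourcetype = log4j
host = PC
index = test_wild
# Skip any file whose modification time is more than a day old.
ignoreOlderThan = 1d

One documented caveat: once a file lands on the ignore list it is not checked again until the file monitoring subsystem restarts, so this only suits directories that are truly no longer written to.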
Hey everyone, I currently have a use case for which I set up a Splunk Enterprise environment in an Ubuntu VM (VMware). I want to build an app with the Add-on Builder that uses a Python script as the input method to make an API call and get my data into Splunk. That's the goal, at least.

The VM communicates with the Internet just fine (albeit via a proxy), and my Python script gets the data from the API endpoint. However, when I enter the proxy credentials from my VM into the Add-on Builder configuration, I get the following error: "There was a problem connecting to the App Certification service. The service might not be available at this time, or you might need to verify your proxy settings and try again."

Now, assuming that I did not mess up the proxy credentials, my next best guess is that I need to give my Splunk environment a certificate so it can communicate with the proxy. So we finally reach my question: where in the directory structure would I need to place such a certificate file so that the Splunk add-on can find it?
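Not an authoritative answer, but a sketch of one common approach, assuming the Add-on Builder's outbound call goes through Splunk's bundled Python and the requests library: point the REQUESTS_CA_BUNDLE environment variable at a PEM file containing the proxy's CA certificate, for example via $SPLUNK_HOME/etc/splunk-launch.conf (the file path below is a placeholder):

# in $SPLUNK_HOME/etc/splunk-launch.conf
REQUESTS_CA_BUNDLE=/opt/splunk/etc/auth/my_proxy_ca.pem

Then restart Splunk so the variable is inherited by the Python processes. Whether the Add-on Builder honors this variable is an assumption to verify.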
Hi, I'd like to use a text box input to add a string value to a multiselect, so that the multiselect token filters out the selected values in each search query. Here is what I have:

<input type="text" token="filter_out_text_input" id="filter_out_text_input">
  <label>Enter a log event you want to filter out</label>
  <prefix>"*</prefix>
  <suffix>*"</suffix>
</input>
<input type="multiselect" token="filter_out_option" id="filter_out_option">
  <label>List to filter out log events</label>
  <valuePrefix>NOT "*</valuePrefix>
  <valueSuffix>*"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
...
<title>$app$ Error Frequency</title>
<chart>
  <search>
    <query>index="$app$-$env$" logLevel="ERROR" $filter_out_option$ $filter_out_text_input$
| eval filter_out_option="$filter_out_option$"
| where isnotnull(filter_out_option) AND filter_out_option!=""
| eval filter_out_text_input="$filter_out_text_input$"
| where isnotnull(filter_out_text_input) AND filter_out_text_input!=""
| multikv
| eval ReportKey="error rate"
| timechart span=30m count by ReportKey</query>
    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
    <sampleRatio>1</sampleRatio>
    <refresh>1m</refresh>
    <refreshType>delay</refreshType>
  </search>
  <option name="charting.chart">area</option>
  <option name="charting.chart.nullValueMode">connect</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.stackMode">default</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.layout.splitSeries">1</option>
  <option name="refresh.display">progressbar</option>
</chart>

I would like to filter out error strings in the above search. Thanks in advance.
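Two observations, offered as hedged guesses from the XML above. First, the eval/where guards compare the literal token text, so when a token is empty every event is discarded, and when a token is unset the search does not run at all; giving each input a default avoids the guards entirely. Second, with OR as the delimiter, NOT "*a*" OR NOT "*b*" only excludes events that match both strings, so AND is probably closer to the intent. A sketch using a hypothetical placeholder value (zz_no_match_zz) so the tokens are always defined:

<input type="text" token="filter_out_text_input" id="filter_out_text_input">
  <label>Enter a log event you want to filter out</label>
  <prefix>NOT "*</prefix>
  <suffix>*"</suffix>
  <default>zz_no_match_zz</default>
</input>
<input type="multiselect" token="filter_out_option" id="filter_out_option">
  <label>List to filter out log events</label>
  <choice value="zz_no_match_zz">(no filter)</choice>
  <default>zz_no_match_zz</default>
  <valuePrefix>NOT "*</valuePrefix>
  <valueSuffix>*"</valueSuffix>
  <delimiter> AND </delimiter>
</input>

The query then shrinks to:

index="$app$-$env$" logLevel="ERROR" $filter_out_option$ $filter_out_text_input$
| multikv
| eval ReportKey="error rate"
| timechart span=30m count by ReportKey

Note the text input's prefix changes to NOT "* so the exclusion lives in the token itself.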
I'm trying to use an outer join, but I am not getting the desired output. It looks like the query on the left has fewer events than the subsearch query. Could that be the reason the outer join isn't working? I can't use stats because the two queries span multiple indexes and sourcetypes.
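For what it's worth, stats can combine events across multiple indexes and sourcetypes in a single search, so that alone doesn't rule it out. A minimal sketch, where the index/sourcetype names and the shared field id are placeholders:

(index=left_idx sourcetype=st_a) OR (index=right_idx (sourcetype=st_b OR sourcetype=st_c))
| stats values(left_field) as left_field values(right_field) as right_field by id

Rows that appear on only one side survive (their values() are simply empty), which is essentially what an outer join would return.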
How can I create a custom table in a Splunk view that stores some user credentials, and how can I create a button that opens a new-record form through which users can submit the information in Splunk? I have attached an image for reference.
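One common pattern for user-submitted records is a KV store collection rather than a static table; a sketch of the definitions, with hypothetical collection and field names (note that storing real credentials in a lookup is insecure; Splunk's storage/passwords REST endpoint is the supported place for secrets):

# collections.conf
[user_records]
field.username = string
field.note = string

# transforms.conf
[user_records_lookup]
external_type = kvstore
collection = user_records
fields_list = _key, username, note

A dashboard table can then display the records with | inputlookup user_records_lookup. The "new record" button/form part is not native Simple XML; it typically needs a small JavaScript extension that POSTs to storage/collections/data/user_records.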
I want to add a download/export button, which I am able to do, but the CSV results are also visible in the panel, as below. I want to show only the download button and hide the results panel, which I have not been able to do.

<row>
  <panel>
    <table>
      <search>
        <done>
          <eval token="date">strftime(now(), "%d-%m-%Y")</eval>
          <set token="sid">$job.sid$</set>
        </done>
        <query>index=test</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
    <html>
      <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=test_$date$.csv&amp;outputMode=csv" class="button js-button">Download</a>
      <style>
        .button {
          background-color: steelblue;
          border-radius: 5px;
          color: white;
          padding: .5em;
          text-decoration: none;
        }
        .button:focus, .button:hover {
          background-color: #2A4E6C;
          color: White;
        }
      </style>
    </html>
  </panel>
</row>
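A sketch of one way to hide the table while keeping the search and tokens: add a depends attribute to the <table> that references a token you never set, so the panel stays hidden but the search still runs and the <done> handler still fires ($alwaysHidden$ is a hypothetical, intentionally-unset token):

<table depends="$alwaysHidden$">
  <search>
    ...same search, done handler, and options as above...
  </search>
</table>

An alternative with the same effect is moving the search to a dashboard-level <search id="..."> block and deleting the <table> element entirely.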
Hello.

We are deploying a new search head in our Splunk environment, using Windows Server 2019 as the platform. The search head is not working, and we can see these errors on the indexer:

WARN BundleDataProcessor [12404 TcpChannelThread] - Failed to create file E:\Splunk\var\run\searchpeers\[search_head_hostname]-1713866571.e035b54cfcafb33b.tmp\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\ta_microsoft_graph_security_add_on_for_splunk\aob_py2\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_mng.py while untarring E:\Splunk\var\run\searchpeers\[search_head_hostname]-1713866571.bundle: The system cannot find the path specified.

The file name (including the path) exceeds the 260-character limit on Windows. How can we use this add-on?
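Two workarounds are commonly suggested for this, both offered here with hedging. The first is shortening SPLUNK_HOME (e.g. installing under E:\S rather than E:\Splunk) so the untarred bundle paths stay below 260 characters. The second is enabling Win32 long path support, which only helps if the application declares itself long-path aware; I cannot confirm that splunkd does, so treat this as something to test. The registry value itself is standard Windows (Server 2016 and later):

HKLM\SYSTEM\CurrentControlSet\Control\FileSystem
LongPathsEnabled (REG_DWORD) = 1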
Hi All,

We have a strange problem here. On a Linux syslog server, the logs from different systems are each saved to a file. These files are monitored by a Splunk UF and forwarded through two heavy forwarders to be stored on the indexer. We have now noticed that the number of events in the Splunk index sometimes differs from the syslog data delivered; sometimes events are missing in the middle. Since reports and alerts are configured on the Splunk data, it is of course essential that ALL events arrive in Splunk. Is this behavior known, and where can I find, for example, how many events have been processed on the HFs?

Regards, Klaus
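A starting point for the accounting question is the forwarders' own _internal data. Two standard metrics.log searches (the host names are placeholders for your HFs and syslog server):

Check for blocked queues on the heavy forwarders:

index=_internal source=*metrics.log* group=queue (host=hf1 OR host=hf2) blocked=true
| timechart count by name

Compare per-source throughput over time on the UF, to see whether a monitored file paused:

index=_internal source=*metrics.log* group=per_source_thruput host=syslog_server
| timechart span=5m sum(ev) by series

Persistent blocked=true entries usually point at queue or throughput limits (for example the UF's default maxKBps in limits.conf) rather than loss in transit.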
Hi All,

I have a field called filename, and I want to populate results from it. I created two joins to separate success files and failure files. Is there another way to do this without using join?

Success file and failure file:

| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (TEST) AND message IN ("*File put Succesfully*", "*successful Call*", "*file processed successfully*", "*Archive file processed successfully*", "*processed successfully for file name*")
    | rename content.Filename as SuccessFileName correlationId as CorrelationId
    | table CorrelationId SuccessFileName
    | stats values(*) as * by CorrelationId]
| table CorrelationId InterfaceName ApplicationName FileList SuccessFileName Timestamp
| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (p-oracle-fin-processor, p-oracle-fin-processor-2, p-wd-finance-api) AND priority IN (ERROR, WARN)
    | rename content.Filename as FailureFileName correlationId as CorrelationId timestamp as ErrorTimestamp content.ErrorType as ErrorType content.ErrorMsg as ErrorMsg
    | table FailureFileName CorrelationId ErrorType ErrorMsg ErrorTimestamp]
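A join-free sketch: retrieve both populations in one search, tag success/failure with eval, and roll up by CorrelationId. Field names are taken from the query above; the exact conditions will likely need adjusting:

index=mulesoft ((applicationName IN (TEST)) AND message IN ("*File put Succesfully*", "*successful Call*", "*file processed successfully*", "*Archive file processed successfully*", "*processed successfully for file name*")) OR ((applicationName IN (p-oracle-fin-processor, p-oracle-fin-processor-2, p-wd-finance-api)) AND priority IN (ERROR, WARN))
| rename correlationId as CorrelationId
| eval SuccessFileName=if(applicationName=="TEST", 'content.Filename', null())
| eval FailureFileName=if(in(priority, "ERROR", "WARN"), 'content.Filename', null())
| stats values(SuccessFileName) as SuccessFileName values(FailureFileName) as FailureFileName values(content.ErrorType) as ErrorType values(content.ErrorMsg) as ErrorMsg by CorrelationId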
I have two queries giving me two tables, named Distributed and Mainframe, as below.

Distributed:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10

Mainframe:

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

I am trying to append both tables into one, using something like this:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]

The issue: I am trying to add a column named "Log_Source" at the start, saying either Distributed or Mainframe for each corresponding result. I am not sure how to achieve it. Please help.
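Tagging each result set is just an eval in each leg of the append; a skeleton using the two queries exactly as above (only the new lines are shown in full):

<Distributed query from above, through | head 50>
| eval Log_Source="Distributed"
| append
    [search <Mainframe query from above, through | head 50>
    | eval Log_Source="Mainframe"
    | rename JOB_MEM_NAME as JobName]
| table Log_Source ScheduleDate JobName StartTime EndTime Duration

The optional rename collapses JOB_MEM_NAME and JobName into a single column so the appended rows line up.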
Detect and score application vulnerabilities
Video Length: 3 min 50 seconds

CONTENTS | Introduction | Video | Resources | About the presenter

In this deep-dive video, Adam Smye-Rumsby explores how Cisco Secure Application detects application vulnerabilities and their potential impact on the business. Organizations need to identify and prioritize security threats based on risk scores derived from an understanding of business impact. Cisco Secure Application breaks down complex business transactions, like those in a shopping cart system, into understandable data, exposing the vulnerabilities and helping you prioritize defense strategies against significant security incidents.

Additional Resources
Protecting what matters most with business risk observability
Learn more about Application Security Monitoring on the product page.

About the presenter
Adam J. Smye-Rumsby, Solutions Engineer

Adam J. Smye-Rumsby joined AppDynamics as a Senior Sales Engineer in 2018, after nearly 16 years with IBM across a variety of roles, including over five years as a Senior Sales Engineer in the Digital Experience & Collaboration business unit. Since then, he has helped dozens of enterprise and commercial customers improve the maturity of their application monitoring practices.

More recently, Adam has taken on the challenge of developing subject-matter expertise in the application security market. He has contributed to two published books on the use of Java technology and holds patents in AI/ML, collaboration, VR, and other technology areas. Reach out to Adam to learn more about how AppDynamics is helping Cisco customers secure their applications in an ever-changing threat landscape.
Hi all,

I have a question about using RelayState with SAML when using Azure AD B2C as the IdP. We successfully integrated Splunk as the SP with AD B2C as the IdP using SAML and custom policies. Now we want to redirect users to another URL after successful authentication, and the only way forward I could find was the RelayState parameter. Below are all the combinations I tried for the Single Sign-On (SSO) URL:

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?SAML_Request=<base64_SAML_Auth_Request>&RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https://redirectWebsite.com

I keep getting the error "error while parsing relaystate. failed to decode relaystate." Any advice on how to embed the RelayState in the SSO URL?
Good morning,

I am receiving Windows events on a collector with Splunk Edge Processor, and it sends them to the tenant correctly, but not to the correct index. According to the data, the events go through the pipeline but land in main instead of the intended index.

This is the SPL2 of the pipeline:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
    | eval index = if(isnull(index), "usa_windows", index)
    | into $destination;
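Two hedged things to check. First, whether the usa_windows index actually exists on the tenant; events routed to a nonexistent index typically fall back to the default index behavior, which would look exactly like this. Second, whether the incoming events already carry an index field (for example, set by a forwarder's inputs.conf), in which case the isnull() branch never fires; an unconditional assignment rules that out:

$pipeline = | from $source
    | eval index = "usa_windows"
    | into $destination;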
Hello,

I have been receiving the events without formatting, even though I have installed the add-on on the HF and in the cloud.
I would like some help creating a report that shows the difference in seconds between my event timestamp and the Splunk indexing timestamp. The query below gives me the difference between _indextime and _time, but I would also like the difference in seconds between GenerationTime (e.g. 2024-04-23 12:49:52) and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| table Node EventNumber GenerationTime Index_Time _time secondsDifference
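A sketch of the extra calculation: parse GenerationTime with strptime and subtract (the format string matches the sample value; if GenerationTime is in a different timezone than the indexer, the format needs a timezone component):

index=splunk_index sourcetype=splunk_sourcetype
| eval genEpoch=strptime(GenerationTime, "%Y-%m-%d %H:%M:%S")
| eval secondsDifference=_indextime-_time
| eval genToIndexSeconds=round(_indextime-genEpoch)
| convert ctime(_indextime) as Index_Time
| table Node EventNumber GenerationTime Index_Time _time secondsDifference genToIndexSeconds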
Hi,

One bucket is stuck in the "fixup task pending" state with the error below. I tried restarting Splunk, re-syncing, and rolling the bucket, but nothing works. Can anyone suggest a possible solution or how to troubleshoot the issue?

Missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={ site3:1 }
Hi All,

I have created a dashboard for JSON data. There are two sets of data in the same index: one is Info.metadata{} and the other is Info.runtime_data{}, arriving as different events. Both kinds of events share one common field, "Info.Title". How can I combine these two event sets?
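Since the two event types share Info.Title, a stats rollup merges them without a join; a minimal sketch (your_index is a placeholder, and values(*) is a blunt instrument, so list specific fields if names collide between the two event types):

index=your_index
| stats values(*) as * by Info.Title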
Hello,

I want to fetch a value from the inputs.conf file (/Splunk/etc/apps/$app/local), i.e.:

[stanza-name]
value-name = value

How can I retrieve this value and use it inside a Python lookup script (stored in /Splunk/etc/apps/$app/bin)?

Thanks,
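A sketch of one way to do this from the lookup script, using only the standard library; .conf files are INI-like, so configparser can read a single file. The caveat is that this reads only the one file and does not apply Splunk's default/local layering (for merged values, Splunk's bundled splunk.clilib.cli_common.getConfStanza is an alternative):

import os
import configparser

# Assumption: this script lives in /Splunk/etc/apps/$app/bin,
# so the app root is one directory up.
APP_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
CONF_PATH = os.path.join(APP_DIR, "local", "inputs.conf")

def get_conf_value(stanza, key):
    # strict=False tolerates duplicate keys; interpolation=None keeps
    # literal '%' characters from being treated as substitutions.
    parser = configparser.ConfigParser(strict=False, interpolation=None)
    parser.read(CONF_PATH)
    return parser.get(stanza, key)

value = get_conf_value("stanza-name", "value-name")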
Hello everyone, please help me with fetching events from a Windows Event Collector. I installed the Universal Forwarder on a Windows Server 2022 machine that collects the forwarded events from other computers. I am trying to fetch all forwarded events from this server to my Splunk indexer via the forwarder, but the agent only sends the events sporadically, not in real time. I don't see any errors in the SplunkForwarder logs or on the Splunk indexer. I am also using Splunk_TA_windows to fetch the events.
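Two hedged things to check. On the Windows side, if the WEF subscription uses the default "Normal" delivery mode, Windows itself batches forwarded events (delays of up to roughly 15 minutes are expected); switching the subscription to "Minimize Latency" is the usual fix for near-real-time delivery. On the Splunk side, a typical Splunk_TA_windows stanza for a collector looks like this (values are assumptions to compare against yours; the ForwardedEvents channel is where WEF subscriptions land):

[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
# renderXml preserves the originating computer name carried in
# forwarded events
renderXml = true
index = wineventlog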
Hi Team,

I am looking for a way to monitor the page load performance of a Salesforce Community Cloud application (built using Lightning Web Components) that runs in authenticated mode. We want to capture network timings, resource loading, and transaction times, to name a few. Is this possible with AppDynamics? If so, please point me to the relevant documentation. Thanks.
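AppDynamics Browser Real User Monitoring (Browser RUM) is the usual fit for page load, resource, and Ajax timings; it works by loading the adrum JavaScript agent in the page. A sketch of the standard injection snippet (the appKey is a placeholder, and whether Salesforce Lightning allows injecting it, e.g. via a head markup override or a static resource, is a Salesforce configuration question to verify):

<script charset="UTF-8">
  window["adrum-start-time"] = new Date().getTime();
  (function (config) {
    config.appKey = "AD-AAA-AAA-AAA";  // placeholder key
    config.adrumExtUrlHttps = "https://cdn.appdynamics.com";
    config.beaconUrlHttps = "https://col.eum-appdynamics.com";
    config.spa = { "spa2": true };     // SPA2 mode suits single-page frameworks like Lightning
  })(window["adrum-config"] || (window["adrum-config"] = {}));
</script>
<script src="https://cdn.appdynamics.com/adrum/adrum-latest.js" charset="UTF-8"></script>

Authenticated pages are not a blocker for Browser RUM, since the agent reports from the user's browser regardless of login state.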