All Posts



Essentially, you need to do the hard work! First untable the stats from the timechart results, find each user's maximum, sort the results by maximum and user, then count the users and find the "middle 20", then convert back to chart format.

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
| timechart span=1m sum(eval(RSZ_KB/1024/1024)) as Mem_Used_GB by USER useother=false limit=0
| untable _time USER Mem_Used_GB
| eventstats max(Mem_Used_GB) as max by USER
| sort 0 max USER desc
| streamstats dc(USER) as user_number
| eventstats dc(USER) as total
| where user_number > (total - 20)/2 and user_number < 20+((total - 20)/2)
| xyseries _time USER Mem_Used_GB
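The selection in the where clause is just arithmetic on each user's rank. A rough Python sketch of the same "middle 20" logic, assuming user_max maps each user to their peak memory value (names and data are illustrative, not from the search):

```python
def middle_20(user_max, window=20):
    # Rank users by their maximum, descending (mirrors `sort 0 max USER desc`)
    ranked = sorted(user_max, key=lambda u: user_max[u], reverse=True)
    total = len(ranked)
    lo = (total - window) / 2
    # Keep 1-based ranks r with lo < r < window + lo,
    # mirroring the `where` clause in the SPL above
    return [u for r, u in enumerate(ranked, start=1) if lo < r < window + lo]

users = {f"user{i}": i for i in range(60)}  # fabricated per-user maxima
middle = middle_20(users)
```

Note that with 60 users the strict inequalities keep 19 rows (ranks 21 through 39), so you may want to loosen one side of the where clause depending on how you count the "middle 20".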
OK. Let's start at the start.

index=finder_db AND (host="host1" OR host="host2") AND (("Wonder Exist here") OR ("Message=Limit the occurrence" AND "FinderField=ZEOUS"))

This will select the events for further processing. But the question is whether you're extracting any fields from those events. Before we go any further, we need to know whether: 1) the uniqueId field (to which you're referring in subsequent posts in a case-inconsistent manner) is extracted; 2) the "data" field(s) which you want to "merge" are extracted. Generally, field extraction should be (actually, should already have been) handled at the data onboarding stage. When you have this covered, you can get to the second part - handling the logic behind "joining" your events.
I have used rex field=msgTxt but I keep getting errors. I'm sorry, but I've worked on this for hours and nothing seems to work. I'm still pretty new to Splunk and this is not in my skill set. Maybe I should start over. However, the results I'm looking for have slightly changed. The field that contains my results is msgTxt, and I would like to pull both Latitude/Longitude pairs and the WarningMessages. The field has Latitude and Longitude listed twice. Most of the time the first set will return 0's, and the log will always be in this format:

StandardizedAddressService SUCCEEDED - FROM: {"Address1":"63 Somewhere NW ST","Address2":null,"City":"OKLAND CITY","County":null,"State":"OK","ZipCode":"99999-1111","Latitude":97.999,"Longitude":-97.999,"IsStandardized":false,"AddressStandardizationStatus":0,"AddressStandardizationType":0} RESULT: 1 | {"AddressDetails":[{"AssociatedName":"","HouseNumber":"63","Predirection":"NW","StreetName":"Somewhere","Suffix":"ST","Postdirection":"","SuiteName":"","SuiteRange":"","City":"OKLAND CITY","CityAbbreviation":"OKLAND CITY","State":"OK","ZipCode":"99999","Zip4":"1111","County":"Oklahoma","CountyFips":"40109","CoastalCounty":0,"Latitude":97.999,"Longitude":-97.999"Fulladdress1":"63 Somewhere NW ST","Fulladdress2":"","HighRiseDefault":false}],"WarningMessages":[],"ErrorMessages":[],"GeoErrorMessages":[],"Succeeded":true,"ErrorMessage":null}

I'm hoping to see the following results:

Latitude   Longitude   Latitude   Longitude   WarningMessages
99.2541    -25.214     99.254     -25.214     NULL
00.0000    -00.000     99.254     -21.218     NULL
00.0000    -00.000     00.000     -00.000     Error message with something

The results will be different for each log, and I will be searching through 1000s of logs. If it's too much work to show both sets of Latitude/Longitude values, then the second set would work. Your help is greatly appreciated. Thanks
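One pattern can match both pairs, since Latitude/Longitude appear in the same shape each time. Here is a minimal Python sketch against an abbreviated copy of the log above (the abbreviated string and variable names are illustrative; only the field names come from the sample):

```python
import re

# Abbreviated copy of the msgTxt sample, keeping both coordinate pairs,
# the missing comma before "Fulladdress1", and the WarningMessages array
msg = ('StandardizedAddressService SUCCEEDED - FROM: '
       '{"Latitude":97.999,"Longitude":-97.999,"IsStandardized":false} RESULT: 1 | '
       '{"AddressDetails":[{"Latitude":97.999,"Longitude":-97.999"Fulladdress1":"63 Somewhere NW ST"}],'
       '"WarningMessages":[],"Succeeded":true}')

# Each pair: the number pattern stops at the first non-numeric character,
# so the missing comma before "Fulladdress1" does not break the match
pairs = re.findall(r'"Latitude":(-?[\d.]+),"Longitude":(-?[\d.]+)', msg)

# Contents of the WarningMessages array (empty string when there are none)
warn = re.search(r'"WarningMessages":\[(.*?)\]', msg)
warnings = warn.group(1) if warn else None
```

The rough SPL equivalent (untested) would be along the lines of `| rex field=msgTxt max_match=0 "\"Latitude\":(?<Latitude>-?[\d.]+),\"Longitude\":(?<Longitude>-?[\d.]+)"`, with a similar rex for WarningMessages.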
Thank you for the quick response! This looks like a great solution - however, the reason I was using transaction is that we have new users all the time and I can't pre-add every possible username. I know people say that transaction is awful, but for the life of me I've never had a single problem using it.
Hi all, I'm struggling with an issue related to collecting Fortinet FortiOS events through SC4S. If I use the UDP protocol I have no issues, but when changing the collecting protocol to TCP the events are not interpreted correctly, because the line breaking no longer works and I basically receive a buffer of merged events broken only by the _raw size limit. My config is the following:

Fortinet FW --> (Syslog TCP) --> SC4S --> HEC on Indexer (Splunk Cloud) --> Search Head (Splunk Cloud)

If I receive the same events directly with a Splunk instance where the "Fortinet FortiGate Add-On for Splunk" is installed, the additional configuration below correctly breaks the events:

[source::tcp:1514]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\d{2,3}\s+<\d{2,3}>)
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N

If I try to apply this configuration on the Splunk Cloud SH it does not work. I believe that SC4S or the indexer is not permitting this line-breaking configuration on the SH, so I'm unable to make it work. Maybe it's possible to apply some adjustment on SC4S, if anyone has already solved this. Regards
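For intuition on what that LINE_BREAKER does: Splunk breaks the stream at each regex match and discards the text captured by the first group. A minimal Python sketch of that behaviour on a fabricated merged buffer (only the regex comes from the config above; the sample frames are made up):

```python
import re

LINE_BREAKER = r"(\d{2,3}\s+<\d{2,3}>)"

# Fabricated buffer of two syslog frames merged together, as received over TCP
buffer = "123 <189>eventtime=1700000000 msg=first456 <189>eventtime=1700000060 msg=second"

# re.split on a pattern with one capture group interleaves the captured
# delimiters into the result; keeping only the even slots mimics Splunk
# discarding the text matched by capture group 1
parts = re.split(LINE_BREAKER, buffer)
events = [p for i, p in enumerate(parts) if i % 2 == 0 and p]
```

If the two frames come out as separate events here but not in your pipeline, the breaking config is likely not being applied where parsing happens (SC4S/HEC path) rather than the regex being wrong.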
In my haste I used a user/pass combination in the requests example, but I think you actually need to provide an Authorization header, such as in curl: --header 'Authorization: Bearer <yourToken>' You can get a token via the Admin web UI: https://docs.appdynamics.com/appd/24.x/latest/en/extend-splunk-appdynamics/splunk-appdynamics-apis/api-clients#id-.APIClientsv24.10-generate-access-tokens:~:text=Generate%20the%20Token%20Through%20the%20Administration%20UI%C2%A0 Does this work? (Sorry, I don't have access to an AppD environment at the moment to check!) Will
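A minimal sketch of the same call with the Authorization header, using only the Python standard library (the controller URL and token are placeholders you would replace with your own):

```python
import urllib.request

controller = "https://controller.example.com"  # placeholder
token = "<yourToken>"                          # placeholder

# Build (but don't send) the request with a Bearer token instead of basic auth
req = urllib.request.Request(
    f"{controller}/controller/api/rbac/v1/users/5",
    headers={"Authorization": f"Bearer {token}"},
)
# urllib.request.urlopen(req) would perform the call once real values are set
```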
Hi @sdanayak  Does this work for you?

| stats values(*) AS * by UniqueId

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
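For intuition, `stats values(*) AS * by UniqueId` collapses all events sharing a UniqueId into one row whose fields hold the distinct values seen across those events. A rough Python analogue (the sample events are fabricated):

```python
from collections import defaultdict

def merge_by_unique_id(events):
    # Collect the distinct values of every field per UniqueId,
    # roughly what `stats values(*) AS * by UniqueId` produces
    merged = defaultdict(lambda: defaultdict(set))
    for ev in events:
        uid = ev["UniqueId"]
        for field, value in ev.items():
            if field != "UniqueId":
                merged[uid][field].add(value)
    # values() returns distinct values in sorted order, as stats does
    return {uid: {f: sorted(v) for f, v in fields.items()}
            for uid, fields in merged.items()}

events = [
    {"UniqueId": "A", "host": "host1", "msg": "start"},
    {"UniqueId": "A", "host": "host1", "msg": "end"},
    {"UniqueId": "B", "host": "host2", "msg": "start"},
]
rows = merge_by_unique_id(events)
```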
@livehybrid I'm past that error now - it looked to be a connectivity issue. After resolving that, I am getting a 401 response, which looks to be an unauthorized error. I am passing the credentials in this format:

username = ''
password = ''
users_response = requests.get(f'https://<Controller URL>/controller/api/rbac/v1/users/5', auth=(username, password))

Do we need to include the account name as well in the authentication?
Sorry, I missed the bit about the timeout - are you running the Python/curl from the same machine as the browser call? The first thing that comes to mind is perhaps an egress/connection issue.
Still not sure what you are expecting from this forum.
Hi @Casial06  I'd probably use a second stats to get the total number; alternatively, you could use "| eventstats count as totalAccounts" if you want to keep the details of the accounts for your alert.
Hi @Ana_Smith1  Do you get anything in the _internal logs from the add-on? Try something like the below as a starting point:

index=_internal (ERROR OR WARN) "jira"

This search will help identify any issues related to the Jira integration.

Verify network connectivity: ensure there's no network issue preventing Splunk from connecting to Jira. Check firewall rules and proxy settings if applicable. Are you running Splunk on-prem or Splunk Cloud?

Test Jira API connectivity: use a tool like curl to test the Jira API connectivity using the token. This will help determine whether the issue is with the token or the Splunk configuration:

curl -u youremail@example.com:your_api_token https://your-company.atlassian.net/rest/api/3/myself

For more detailed troubleshooting steps and configuration guidelines, refer to the Splunk Add-on for Jira Cloud documentation.
The error is still being reported.
Can you or the customer change the application so it doesn't report the errors in the first place?
Hi @abhi  Your deploymentclient.conf stanza is incorrect; it must be "target-broker:deploymentServer" as below:

[target-broker:deploymentServer]
targetUri = <string>
* The target URI of the deployment server.
* An example of <uri>: <scheme>://<deploymentServer>:<mgmtPort>

For more details check out the deploymentclient.conf docs at https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Deploymentclientconf#:~:text=%5Btarget%2Dbroker%3AdeploymentServer,scheme%3E%3A//%3CdeploymentServer%3E%3A%3CmgmtPort%3E
<form version="1.1" script="solved3.js ,minor.js, warning.js , critical.js" theme="dark"> <label>SBC Monitoring</label> <init> <set token="rangeColors">"0x118832","0xd41f1f"</set> </init> ... See more...
<form version="1.1" script="solved3.js ,minor.js, warning.js , critical.js" theme="dark"> <label>SBC Monitoring</label> <init> <set token="rangeColors">"0x118832","0xd41f1f"</set> </init> <fieldset submitButton="false"> <input type="checkbox" token="srStatus"> <label>Status</label> <choice value="1">solved</choice> <choice value="0">unsolved</choice> <prefix>(</prefix> <suffix>)</suffix> <valuePrefix>solved=</valuePrefix> <delimiter> OR </delimiter> <default>0</default> <initialValue>1,0</initialValue> <change> <eval token="rangeColors">if(isnotnull(mvfind($form.srStatus$,"0")),"\"0x118832\",\"0xd41f1f\"","\"0x118832\",\"0x118832\"")</eval> </change> </input> </fieldset> <row> <panel> <title>MINOR EVENTS</title> <single> <search> <query>| makeresults count=5 | eval solved=random()%2 ```| inputlookup sbc_minor.csv``` | search $srStatus$ | stats count</query> <earliest>-30d@d</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name="colorBy">value</option> <option name="colorMode">block</option> <option name="drilldown">all</option> <option name="numberPrecision">0</option> <option name="rangeColors">[$rangeColors$]</option> <option name="rangeValues">[0]</option> <option name="refresh.display">progressbar</option> <option name="showSparkline">1</option> <option name="showTrendIndicator">1</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> <option name="trendColorInterpretation">standard</option> <option name="trendDisplayMode">absolute</option> <option name="unitPosition">after</option> <option name="useColors">1</option> <option name="useThousandSeparators">1</option> <drilldown> <set token="minor">minor</set> <unset token="major"></unset> <unset token="critical"></unset> <unset token="warning"></unset> </drilldown> </single> </panel> <panel> <title>MAJOR EVENTS</title> <single> <search> <query>| makeresults count=5 | eval solved=random()%2 ```| 
inputlookup sbc_major.csv``` | search $srStatus$ | stats count</query> <earliest>-30d@d</earliest> <latest>now</latest> </search> <option name="colorBy">value</option> <option name="colorMode">block</option> <option name="drilldown">all</option> <option name="numberPrecision">0</option> <option name="rangeColors">[$rangeColors$]</option> <option name="rangeValues">[0]</option> <option name="refresh.display">progressbar</option> <option name="showSparkline">1</option> <option name="showTrendIndicator">1</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> <option name="trendColorInterpretation">standard</option> <option name="trendDisplayMode">absolute</option> <option name="unitPosition">after</option> <option name="useColors">1</option> <option name="useThousandSeparators">1</option> <drilldown> <set token="major">major</set> <unset token="minor"></unset> <unset token="critical"></unset> <unset token="warning"></unset> </drilldown> </single> </panel> <panel> <title>CRITICAL EVENTS</title> <single> <search> <query>| makeresults count=5 | eval solved=random()%2 ```| inputlookup sbc_critical.csv``` | search $srStatus$ | stats count</query> <earliest>-30d@d</earliest> <latest>now</latest> </search> <option name="colorMode">block</option> <option name="drilldown">all</option> <option name="rangeColors">[$rangeColors$]</option> <option name="rangeValues">[0]</option> <option name="refresh.display">progressbar</option> <option name="useColors">1</option> <drilldown> <set token="critical">critical</set> <unset token="major"></unset> <unset token="minor"></unset> <unset token="warning"></unset> </drilldown> </single> </panel> <panel> <title>WARNING EVENTS</title> <single> <search> <query>| makeresults count=5 | eval solved=random()%2 ```| inputlookup sbc_warning.csv``` | search $srStatus$ | stats count</query> <earliest>0</earliest> <latest></latest> </search> <option 
name="colorMode">block</option> <option name="drilldown">all</option> <option name="rangeColors">[$rangeColors$]</option> <option name="rangeValues">[0]</option> <option name="refresh.display">progressbar</option> <option name="useColors">1</option> <drilldown> <set token="warning">warning</set> <unset token="major"></unset> <unset token="minor"></unset> <unset token="critical"></unset> </drilldown> </single> </panel> </row> <row> <panel> <title>MINOR ALERTS HISTORY</title> <chart> <search> <query>index=sbc-logs RAISE-ALARM | dedup S | rex field=_raw ".*Severity:(?&lt;Severity&gt;\D+);" | rex field=_raw "\[Time:(?&lt;Time&gt;.*)]" | rex field=Time "(?&lt;date&gt;.*)@" | rex field=_raw "RAISE-ALARM:(?&lt;Alarm_Type&gt;\w+)" | rex max_match=0 field=_raw ": \[(?&lt;Region&gt;\w+)\]" | rex max_match=0 field=_raw "\[\w+\d\](?&lt;message&gt;[^;]+)" | table Alarm_Type Region message IP Severity Time date | search Severity=minor | stats count as Total by date | appendpipe [ stats count | eval Message="No Minor Alerts" | where count==0 | table Message | fields - Alarm_Type Region message IP Severity Time date] | transpose 0 | eval allnulls=1 | foreach row* [ eval allnulls=if(isnull('&lt;&lt;FIELD&gt;&gt;'),allnulls,0) ] | where allnulls=0 | fields - allnulls | transpose 0 header_field=column | fields - column</query> <earliest>0</earliest> <latest></latest> </search> <option name="charting.chart">column</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> <panel> <title>MAJOR ALERTS HISTORY</title> <chart> <search> <query>index=sbc-logs RAISE-ALARM | dedup S | rex field=_raw ".*Severity:(?&lt;Severity&gt;\D+);" | rex field=_raw "\[Time:(?&lt;Time&gt;.*)]" | rex field=Time "(?&lt;date&gt;.*)@" | rex field=_raw "RAISE-ALARM:(?&lt;Alarm_Type&gt;\w+)" | rex max_match=0 field=_raw ": \[(?&lt;Region&gt;\w+)\]" | rex max_match=0 field=_raw "\[\w+\d\](?&lt;message&gt;[^;]+)" | table Alarm_Type Region message 
IP Severity Time date | search Severity=major | stats count as Total by date</query> <earliest>0</earliest> <latest></latest> </search> <option name="charting.chart">column</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> <panel> <title>CRITICAL ALERTS HISTORY</title> <chart> <search> <query>index=sbc-logs RAISE-ALARM | dedup S | rex field=_raw ".*Severity:(?&lt;Severity&gt;\D+);" | rex field=_raw "\[Time:(?&lt;Time&gt;.*)]" | rex field=Time "(?&lt;date&gt;.*)@" | rex field=_raw "RAISE-ALARM:(?&lt;Alarm_Type&gt;\w+)" | rex max_match=0 field=_raw ": \[(?&lt;Region&gt;\w+)\]" | rex max_match=0 field=_raw "\[\w+\d\](?&lt;message&gt;[^;]+)" | table Alarm_Type Region message IP Severity Time date | search Severity=critical | stats count as Total by date | appendpipe [ stats count | eval Message="No critical Alerts" | where count==0 | table Message | fields - Alarm_Type Region message IP Severity Time date] | transpose 0 | eval allnulls=1 | foreach row* [ eval allnulls=if(isnull('&lt;&lt;FIELD&gt;&gt;'),allnulls,0) ] | where allnulls=0 | fields - allnulls | transpose 0 header_field=column | fields - column</query> <earliest>0</earliest> <latest></latest> </search> <option name="charting.chart">column</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> <panel> <title>WARNING ALERTS HISTORY</title> <chart> <search> <query>index=sbc-logs RAISE-ALARM | dedup S | rex field=_raw ".*Severity:(?&lt;Severity&gt;\D+);" | rex field=_raw "\[Time:(?&lt;Time&gt;.*)]" | rex field=Time "(?&lt;date&gt;.*)@" | rex field=_raw "RAISE-ALARM:(?&lt;Alarm_Type&gt;\w+)" | rex max_match=0 field=_raw ": \[(?&lt;Region&gt;\w+)\]" | rex max_match=0 field=_raw "\[\w+\d\](?&lt;message&gt;[^;]+)" | table Alarm_Type Region message IP Severity Time date | search Severity=warning | stats count as Total by date | appendpipe [ stats count | eval Message="No 
Minor Alerts" | where count==0 | table Message | fields - Alarm_Type Region message IP Severity Time date] | transpose 0 | eval allnulls=1 | foreach row* [ eval allnulls=if(isnull('&lt;&lt;FIELD&gt;&gt;'),allnulls,0) ] | where allnulls=0 | fields - allnulls | transpose 0 header_field=column | fields - column</query> <earliest>0</earliest> <latest></latest> </search> <option name="charting.chart">column</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> <row> <panel depends="$minor$"> <title>Minor Events</title> <table id="sbc_minor_table"> <search> <query>| inputlookup sbc_minor.csv | search $srStatus$ | eval Server_Name=case(IP == "10.2.96.35","US-SOU",IP == "10.82.10.245","KR-SEL",IP == "10.86.164.25","CN-SGH",IP == "10.86.68.25","CN-SHH",IP == "10.86.128.25","CN-SHA" ,IP == "10.20.41.90 ","DE-SLO",IP == "10.150.222.120","DE-BIE")</query> <earliest>-30d@d</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel depends="$major$"> <title>Major Events</title> <table id="sbc_alarm_table"> <search> <query>| inputlookup sbc_major.csv | search $srStatus$ | eval Server_Name=case(IP == "10.2.96.35","US-SOU",IP == "10.82.10.245","KR-SEL",IP == "10.86.164.25","CN-SGH",IP == "10.86.68.25","CN-SHH",IP == "10.86.128.25","CN-SHA" ,IP == "10.20.41.90 ","DE-SLO",IP == "10.150.222.120","DE-BIE")</query> <earliest>-30d@d</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel depends="$critical$"> <title>Critical Events</title> <table id="sbc_critical_table"> <search> <query>| inputlookup sbc_critical.csv | search $srStatus$ | eval Server_Name=case(IP == "10.2.96.35","US-SOU",IP == "10.82.10.245","KR-SEL",IP == "10.86.164.25","CN-SGH",IP == "10.86.68.25","CN-SHH",IP == 
"10.86.128.25","CN-SHA" ,IP == "10.20.41.90 ","DE-SLO",IP == "10.150.222.120","DE-BIE")</query> <earliest>-30d@d</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel depends="$warning$"> <title>Warning Events</title> <table id="sbc_warning_table"> <search> <query>| inputlookup sbc_warning.csv | search $srStatus$ | eval Server_Name=case(IP == "10.2.96.35","US-SOU",IP == "10.82.10.245","KR-SEL",IP == "10.86.164.25","CN-SGH",IP == "10.86.68.25","CN-SHH",IP == "10.86.128.25","CN-SHA" ,IP == "10.20.41.90 ","DE-SLO",IP == "10.150.222.120","DE-BIE")</query> <earliest>0</earliest> <latest></latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>
Hi @mriemri14  Is "default" an index that definitely exists? If not, the data might end up in main, or whatever has been configured as the lastChanceIndex in indexes.conf. It's worth checking the _internal logs for any mention of message_trace - rather than specifically for the source containing message_trace - because if the Python file failed before it was able to create the log file, an error may present itself in a different log file. If this doesn't help, I would try some other search terms such as "error" and "microsoft" and then narrow down the results to the time when you expected the input to execute.
Do you use the Splunk Add-on for JIRA on a Splunk Enterprise instance or in your Splunk Cloud env? Have you verified the network connectivity?
1. Choose a custom index that exists for the input
2. Check this index and verify whether data flows in
3. If not, check the internal logs of the instance where the add-on is configured
The customer wants to suppress them because a lot of events are creating noise, but also no errors are showing in the Flow Maps.