All Topics



I created some Services within ITSI using "Import from Search", and it runs every hour to import anything new along with those entities (that is working well). Example: machine1, machine2, machine3, machine4. I also have some other Services that I created that also run every hour; those are supposed to have just a service title and service dependencies to the above services. Example:
rack-001: Service Dependencies to machine1, machine2
rack-002: Service Dependencies to machine3, machine4
The initial search works, but as more machines are added I see the search being run, yet I'm not seeing the Service with the Service Dependencies being created. Any thoughts?
My events are as below:

Mon Nov 23 09:21:57 2020 6 10.0.0.3 3783 /root/A/P2/source1/POL.IDM b s i r kumar ssh 0 *
Mon Nov 23 09:21:58 2020 5 10.0.0.4 3783 /root/A/P2/.stfs/objects/8a/32bcb75c884c00175c989636000ba/b14fbda4-6857-4910-9a74-9789b6165b7f/52925c56-3ae2-4f75-bb3a-97622e9223b0/8a332bcb75c884c00175c989751500c3/POL.IDM b n o r kumar ssh 0 *
Mon Nov 23 09:15:25 2020 7 10.0.0.2 68 /root/A/P1/.stfs/objects/8a/325cc74705abd017472f907ce0155/12763075-66b1-4a1b-b080-0c5d0a5a0c11/d8c5486a-57ab-4798-a8cc-2bf45f3b975b/8a3325cc74705abd017472f9bbc701c7/WEB.dat a s o p ftp 0 *

If I extract the fields, I need the below (values listed for event 1, event 2, event 3):

current_time = Mon Nov 23 09:21:57 2020, Mon Nov 23 09:21:58 2020, Mon Nov 23 09:15:25 2020
transfer_time = 6, 5, 7
remote_host = 10.0.0.3, 10.0.0.4, 10.0.0.2
file_size = 3783, 3783, 68
file_path = /root/A/P2/source1/POL.IDM, /root/A/P2/.stfs/objects/8a/32bcb75c884c00175c989636000ba/b14fbda4-6857-4910-9a74-9789b6165b7f/52925c56-3ae2-4f75-bb3a-97622e9223b0/8a332bcb75c884c00175c989751500c3/POL.IDM, /root/A/P1/.stfs/objects/8a/325cc74705abd017472f907ce0155/12763075-66b1-4a1b-b080-0c5d0a5a0c11/d8c5486a-57ab-4798-a8cc-2bf45f3b975b/8a3325cc74705abd017472f9bbc701c7/WEB.dat

A few extracts from this field file_path, with / as the delimiter (this I don't know how to handle):
- 3rd index extracted as account = P2, P2, P1
- last index extracted as file_name = POL.IDM, POL.IDM, WEB.dat
- last-but-one index (if it starts with 8a) extracted as route_id = <null, empty>, <null, empty>, 8a3325cc74705abd017472f9bbc701c7

transfer_mode = b, b, a
transfer_security = s, n, s
transfer_status = i, o, o
access_mode = r, r, p
user_name = kumar, kumar, <null, empty> (this I don't know how to handle)
protocol = ssh, ssh, ftp

Can you please help with field extractions / a search query for this? Thank you
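A possible starting point, sketched under the assumption that the fields are space-delimited in the order shown above (the field names are the ones listed; the optional user_name group covers the third event, where no user is present):

```spl
| rex field=_raw "^(?<current_time>\w{3} \w{3}\s+\d+ [\d:]+ \d{4}) (?<transfer_time>\d+) (?<remote_host>[\d.]+) (?<file_size>\d+) (?<file_path>\S+) (?<transfer_mode>\S) (?<transfer_security>\S) (?<transfer_status>\S) (?<access_mode>\S)(?: (?<user_name>\S+))? (?<protocol>\S+) \d+ \*$"
| rex field=file_path "^/(?:[^/]+/){2}(?<account>[^/]+)/"
| rex field=file_path "(?<file_name>[^/]+)$"
| rex field=file_path "/(?<route_id>8a[^/]+)/[^/]+$"
```

The three extra rex calls pull the 3rd path segment (account), the last segment (file_name), and the last-but-one segment only when it starts with 8a (route_id); events whose last-but-one segment doesn't start with 8a simply leave route_id unset.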
Hi Splunk Community, I have a list of IPs returned from a search, and I would like to parse it line by line, make a POST API call to a third party, and display the result on the dashboard in real time. I'm including the IP in the POST API call and receiving the reputation of the IP back. I read the Splunk docs but was confused by saved search, sid, etc. Can anyone please help me elaborate the steps I need to take to make it happen? I have knowledge of Python and Bash.

Example of my API call:
curl -X POST "httpx://api.3rdparty.com/ "Token: 12ab3a1d81124cc323249c7d1c723e39 -i "99.101.22.33"

Thank you. I'm new to Splunk development, please be kind.
I have the below setup added to inputs.conf:

#MONITOR JAVA LOGS IF THEY EXIST
[monitor://C:\Users\*\AppData\LocalLow\Sun\Java\Deployment\logs\*]
disabled=0
index=winevents_end_user
sourcetype=java

What else do I need to do to get this working, and what search do I need to run to see results from it?
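Once the input is collecting, a minimal search using the index and sourcetype names taken from the stanza above should confirm data is arriving:

```spl
index=winevents_end_user sourcetype=java earliest=-24h
```

If nothing comes back, check splunkd.log on the forwarder for errors about the monitor stanza and confirm the winevents_end_user index actually exists on the indexer.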
Hello, I'm working on a Splunk dashboard that pulls specific data which can then be exported to an Excel sheet: a breakdown, per user and per department, of average memory usage per week for the end users. Currently, when you export the data to Excel, the information isn't clean and doesn't bring everything down; I'd like a better breakdown per user and per department, taking the average for that user/machine over a week. Here is the source code so far:
<form> <label>End User Computing - Desktop Dashboard Clone HJ</label> <fieldset submitButton="false" autoRun="false"> <input type="dropdown" token="hosttok" searchWhenChanged="true"> <label>Host Selector</label> <search> <query>| metadata type=hosts index=perfmon sourcetype="Perfmon:Process" |table host Department |sort host</query> <earliest>-24@h</earliest> <latest>now</latest> </search> <fieldForLabel>host</fieldForLabel> <fieldForValue>host</fieldForValue> <prefix>"</prefix> <suffix>"</suffix> <choice value="*">All</choice> </input> <input type="time" token="TimerangePicker" searchWhenChanged="true"> <label></label> <default> <earliest>-4h@m</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <title>Computer Information -Click "HostName" to launch LANDesk remote control via browser</title> <table> <search> <query>index=prd_win_domain_reports earliest=-24h sourcetype=DNS_Records_Info HostName="*" |eval DeviceName=HostName |join type=left DeviceName [search index=prd_dbx_ld_compsys sourcetype="dbmon:kv" earliest=-24h DeviceName="*" |replace "saccap.int/Workstations/*" with "*" in ComputerLocation |replace "CN=*" with "*" in PrimaryOwner |replace "*,OU=Accounts,DC=saccap,DC=int" with "*" in PrimaryOwner |rename ComputerLocation as OU |eval LastUpdate=strftime(_time, "%Y-%m-%d %I:%M:%p")] |rename DeviceName as host |join type=left host [search index=* sourcetype="WMI:MemoryInfo" earliest=-24h host="*" |dedup host DeviceLocator |eval DIMM_GB=((Capacity/1024)/1024/1024) |eventstats sum(DIMM_GB) 
AS Total by host |dedup host |eval Total=round(Total)." GB" |fillnull value="-" |eval DIMM_GB=if(DIMM_GB="-","-",DIMM_GB." GB") |transaction mvlist=true host |rename Total as Memory | eval Date=strftime(_time, "%Y-%m-%d") | eval Time=strftime(_time, "%I:%M:%S:%p")] |join type=left host [search index=* sourcetype=WMI:ProcessorInfo earliest=-24h host="*" |eval hostsCPU=host.DeviceID |dedup hostsCPU |rename NumberOfCores as Cores |rename MaxClockSpeed as ClockSpeed |rename Name as Processor] |join type=left host [search index=* sourcetype=WMI:Version earliest=-24h host="*" |dedup host |eval LastBootUpTime=replace(LastBootUpTime, "^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(.*)", "\2-\3-\1 \4:\5:\6") |rename Caption as OperatingSystem |rename LastBootUpTime as LastReboot] |join type=left host [search index=* sourcetype="WMI:ComputerSystem" earliest=-24h host="*"] |search host=$hosttok$ |table HostName LoginName PrimaryOwner OU IPAddress Model SerialNum ClockSpeed Cores Processor Memory ADAPTERSTRING MEMORY OperatingSystem Version LastReboot LastUpdInvSvr VALastScanDate</query> <earliest>@d</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">1</option> <option name="dataOverlayMode">none</option> <option name="drilldown">cell</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">false</option> <drilldown> <link target="_blank">https://$click.value$:4343/index.html</link> </drilldown> </table> </panel> </row> <row> <panel> <title>LANDesk Events</title> <table> <search> <query>index=ivanti_EPM Changed_by="**" Type_of_Change="*" Message="***" | eval Date=strftime(_time, "%Y-%m-%d") | eval Time=strftime(_time, "%I:%M:%S:%p") |rename Changed_on_machine as host |search host=$hosttok$ | table Date Time Changed_by host Type_of_Change Item_name |sort Date, Time</query> 
<earliest>$TimerangePicker.earliest$</earliest> <latest>$TimerangePicker.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">20</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> </table> </panel> </row> <row> <panel> <title>CPU Usage</title> <chart> <search> <query>index="perfmon" host=$hosttok$ object=Processor counter="% Processor Time" instance=_Total | timechart bins=1000 minspan=1m avg(Value) as "% Processor Time _Total Avg per Min" by Host</query> <earliest>$TimerangePicker.earliest$</earliest> <latest>$TimerangePicker.latest$</latest> <refresh>2m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option> <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option> <option name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">collapsed</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.scale">linear</option> <option name="charting.axisY2.enabled">0</option> <option name="charting.axisY2.scale">inherit</option> <option name="charting.chart">line</option> <option name="charting.chart.bubbleMaximumSize">50</option> <option name="charting.chart.bubbleMinimumSize">10</option> <option name="charting.chart.bubbleSizeBy">area</option> <option name="charting.chart.nullValueMode">gaps</option> <option name="charting.chart.showDataLabels">none</option> <option name="charting.chart.sliceCollapsingThreshold">0.01</option> <option name="charting.chart.stackMode">default</option> <option name="charting.chart.style">shiny</option> <option 
name="charting.drilldown">all</option> <option name="charting.layout.splitSeries">0</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option> <option name="charting.legend.placement">right</option> <option name="height">249</option> <option name="refresh.display">progressbar</option> </chart> </panel> <panel> <title>Application Alerts</title> <chart> <search> <query>index=prd_win_wrk_app host=* (EventCode=1002 OR EventCode=1000) |eval Date=strftime(_time, "%Y-%m-%d") |eval Time=strftime(_time, "%I:%M:%p") |eval Faulting_application_path=lower(Faulting_application_path) |replace "c:\users\*\phprod.exe" with "PHPROD" in Faulting_application_path |replace "c:\program files*\outlook.exe" with "Outlook" in Faulting_application_path |replace "c:\program files*\winword.exe" with "Word" in Faulting_application_path |replace "c:\program files*\excel.exe" with "Excel" in Faulting_application_path |replace "c:\program files*\symphony.exe" with "Symphony" in Faulting_application_path |replace "c:\firmapps*\dashboard.exe" with "Ops Dashboard" in Faulting_application_path |replace "c:\program files (x86)*\ssms.exe" with "SQL Management Studio" in Faulting_application_path |replace "c:\program files*\ciscojabber.exe" with "Jabber" in Faulting_application_path |replace "c:\blp*" with "Bloomberg" in Faulting_application_path |replace "c:\*neovest.exe" with "Neovest" in Faulting_application_path |replace "c:\users*\ciscocollabhost.exe" with "Cisco Spark" in Faulting_application_path |fillnull NA |eval Application_Path=lower(Application_Path) |replace "c:\users\*\phprod.exe" with "PHPROD" in Application_Path |replace "c:\program files*\outlook.exe" with "Outlook" in Application_Path |replace "c:\program files*\winword.exe" with "Word" in Application_Path |replace "c:\program files*\excel.exe" with "Excel" in Application_Path |replace "c:\program files*\symphony.exe" with 
"Symphony" in Application_Path |replace "c:\firmapps*\dashboard.exe" with "Ops Dashboard" in Application_Path |replace "c:\program files (x86)*\ssms.exe" with "SQL Management Studio" in Application_Path |replace "c:\program files*\ciscojabber.exe" with "Jabber" in Application_Path |replace "c:\blp*" with "Bloomberg" in Application_Path |replace "c:\*neovest.exe" with "Neovest" in Application_Path |replace "c:\users*\ciscocollabhost.exe" with "Cisco Spark" in Application_Path |fillnull NA |search Application_Path=Excel OR Faulting_application_path=Excel OR Application_Path=Outlook OR Faulting_application_path=Outlook OR Application_Path=Word OR Faulting_application_path=Word OR Application_Path=Bloomberg OR Faulting_application_path=Bloomberg OR Application_Path="Ops Dashboard" OR Faulting_application_path="Ops Dashboard" OR Application_Path=Neovest OR Faulting_application_path=Neovest |sort _time, limit=0 | where isnotnull(user) | fillnull NA | fields * | stats count(Faulting_application_path) AS count BY Faulting_application_path | sort Faulting_application_path</query> <earliest>@d</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option> <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option> <option name="charting.axisTitleX.visibility">collapsed</option> <option name="charting.axisTitleY.text">Count</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.axisX.abbreviation">none</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.abbreviation">none</option> <option name="charting.axisY.scale">linear</option> <option name="charting.axisY2.abbreviation">none</option> <option name="charting.axisY2.enabled">0</option> <option 
name="charting.axisY2.scale">inherit</option> <option name="charting.chart">pie</option> <option name="charting.chart.bubbleMaximumSize">50</option> <option name="charting.chart.bubbleMinimumSize">10</option> <option name="charting.chart.bubbleSizeBy">area</option> <option name="charting.chart.nullValueMode">gaps</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.chart.sliceCollapsingThreshold">0.01</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.chart.style">shiny</option> <option name="charting.drilldown">all</option> <option name="charting.layout.splitSeries">1</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option> <option name="charting.legend.mode">standard</option> <option name="charting.legend.placement">right</option> <option name="charting.lineWidth">2</option> <option name="refresh.display">progressbar</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> <drilldown> <link target="_blank">https://support.office.com/en-us/article/fixes-or-workarounds-for-recent-issues-in-outlook-for-windows-ecf61305-f84f-4e13-bb73-95a214ac1230?ui=en-US&amp;rs=en-US&amp;ad=US</link> </drilldown> </chart> </panel> <panel> <title>Memory Usage</title> <chart> <search> <query>index="perfmon" host=$hosttok$ object=Memory counter="Available MBytes" instance=*| timechart bins=1000 minspan=1m avg(Value) as "Available MBytes Avg per Min" By Host</query> <earliest>$TimerangePicker.earliest$</earliest> <latest>$TimerangePicker.latest$</latest> <refresh>2m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option> <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option> <option 
name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">visible</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.scale">linear</option> <option name="charting.axisY2.enabled">0</option> <option name="charting.axisY2.scale">inherit</option> <option name="charting.chart">line</option> <option name="charting.chart.bubbleMaximumSize">50</option> <option name="charting.chart.bubbleMinimumSize">10</option> <option name="charting.chart.bubbleSizeBy">area</option> <option name="charting.chart.nullValueMode">connect</option> <option name="charting.chart.showDataLabels">none</option> <option name="charting.chart.sliceCollapsingThreshold">0.01</option> <option name="charting.chart.stackMode">default</option> <option name="charting.chart.style">shiny</option> <option name="charting.drilldown">all</option> <option name="charting.layout.splitSeries">0</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option> <option name="charting.legend.placement">right</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> <row> <panel> <title>User Login Information - Last 7 days</title> <table> <search> <query>sourcetype=WinEventLog:Security (EventCode=4624 (Logon_Type=2 OR Logon_Type=10 OR Logon_Type=7)) OR EventCode=4801 OR EventCode=4803 AND dest_nt_domain=SACCAP host=$hosttok$ | eval Date=strftime(_time, "%Y-%m-%d") | eval Time=strftime(_time, "%I:%M:%S:%p") | sort _time limit=0 | dedup Date, user | lookup AD_Account_Info.csv LogonName as Account_Name OUTPUT Name, Department, Office, FirstName, LastName | replace "*.saccap.int" with "*" in ComputerName | replace "*.saccap.int" with "*" in dest_nt_host | eval Source=dest_nt_domain | eval Country=Office | table Date Time ComputerName user FirstName LastName 
Department Office | sort -Date limit=0</query> <earliest>-7d@h</earliest>

Thank you!
Hello Splunkers, I programmed a saved search with a "send webhook data" action to send the result in JSON format. I noticed that the data sent contains additional information like the app name and results_link: INFO -: {"app" => "search", "results_link" => "http: // splk-sh: 8000 / app / search / search? .... In fact, I don't want this information included in my results. I searched in advanced actions and found: action.webhook.command: sendalert $action_name$ results_file="$results.file$" results_link="$results.url$" I tried to delete results_link, but it doesn't work. Did you encounter this problem with the webhook action? The email action may behave the same way. Thank you
I'm doing a silent upgrade of my Enterprise Console from 20.3 to 20.11. Below is the command I am trying to get working. It keeps failing with an "empty text field" error, shown below in red.

./platform-setup-x64-linux-20.11.3.23827.sh -q -varfile /opt/AppD/installer/.install4j/response.varfile
Unpacking JRE ...
Preparing JRE ...
Starting Installer ...
The installation directory has been set to /opt/AppD/installer.
Verifying if the libaio package is installed.
/opt/AppD/installer/checkLibaio.sh
Verifying if the libnuma package is installed.
/opt/AppD/installer/checkLibnuma.sh
Verifying the libc version.
/opt/AppD/installer/checkLibc.sh
Verifying if the zoneinfo directory exists. if not, check tzdata package
Verifying if the libncurses5 package is installed.
/opt/AppD/installer/checkNcurses.sh
Verifying if curl is installed.
/opt/AppD/installer/detect_os_packages.sh
This text field must not be left empty.

Below is my response file:

cat /opt/AppD/installer/.install4j/response.varfile
# install4j response file for AppDynamics Enterprise Console 20.3.1-21882
platformAdmin.databasePort=3377
platformAdmin.port=9191
platformAdmin.useHttps$Boolean=true
serverHostName=<Hostname removed for this discussion>
sys.adminRights$Boolean=false
sys.installationDir=/opt/AppD/installer
sys.languageId=en
platformAdmin.databaseRootPassword=<Password removed for this discussion>
platformAdmin.adminPassword=<Password removed for this discussion>

Any help?
I need some suggestions on how to make this query more efficient. We would like a distinct count of workstations by sitename, with columns for the count within the past 24 hours, 30 days, 60 days, and 90 days.

index=sample sourcetype="SAMPLE:DATA" earliest=-24h
| stats dc(workstation_name) AS "Past 24 Hours EUD Count" by sitename
| appendcols [ search index=sample sourcetype="SAMPLE:DATA" earliest=-30d | stats dc(workstation_name) AS "Past 30 Days EUD Count" by sitename ]
| appendcols [ search index=sample sourcetype="SAMPLE:DATA" earliest=-60d | stats dc(workstation_name) AS "Past 60 Days EUD Count" by sitename ]
| appendcols [ search index=sample sourcetype="SAMPLE:DATA" earliest=-90d | stats dc(workstation_name) AS "Past 90 Days EUD Count" by sitename ]
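One single-pass alternative, sketched on the assumption that a 90-day base search is acceptable: scan the data once and use conditional distinct counts instead of three appendcols subsearches.

```spl
index=sample sourcetype="SAMPLE:DATA" earliest=-90d
| eval age = now() - _time
| stats dc(eval(if(age<=86400,   workstation_name, null()))) AS "Past 24 Hours EUD Count",
        dc(eval(if(age<=2592000, workstation_name, null()))) AS "Past 30 Days EUD Count",
        dc(eval(if(age<=5184000, workstation_name, null()))) AS "Past 60 Days EUD Count",
        dc(workstation_name) AS "Past 90 Days EUD Count"
        by sitename
```

The thresholds are in seconds (86400 = 24 hours, and so on); this reads the index once rather than four times.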
I am using a bin of 10 minutes with stats for the past hour. What I am running into is that not all items in my stats command have a count for every bucket. For example, one item might show up for the 10, 20, and 40 minute buckets, but I want the 30 and 50 minute buckets to show blank (or zero) values as well. What is the best way to accomplish this? Fillnull does not work here, since there is no null value; the bucket just does not appear at all.
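If the goal is for every 10-minute bucket to appear even when an item has no events in it, one hedged option is timechart, which emits continuous time buckets and fills the missing ones (index and item below are placeholders for your actual index and split-by field):

```spl
index=your_index earliest=-60m
| timechart span=10m count by item
| fillnull value=0
```

If you need to stay with bin + stats, an alternative is to append a dummy zero-count row per item/bucket and sum them, but timechart is usually the simpler route.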
I am looking to convert a field labeled "name" to populate an email field. I want a search that takes a name in a field and formats it to match our naming convention. If the field has a name of Jim Smith, I want a second field to convert the name Jim Smith to Jim.Smith@domain.com. That is, I am looking to replace the space between the first and last name with a "." and add @domain.com after the last name. What is the best way to go about this?
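A hedged sketch of the eval for this, assuming the field is literally name and the convention is First.Last@domain.com:

```spl
| eval email = replace(name, " ", ".") . "@domain.com"
```

If the convention is lowercase, wrap it accordingly: | eval email = lower(replace(name, " ", ".")) . "@domain.com".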
Is there any guidance on finding the proper frozen bucket I would need for a specific time frame?  
Hello! It's me again! I'm looking for a way to consolidate multiple different rex commands into a single command. The 4 rex expressions I'm working with are:

| rex field=pluginText " Model : (?<Model>.+)"
| rex field=pluginText " Software version : (?<Software_version>.+)"
| rex field=pluginText " Version source : (?<Version_source>.+)"
| rex field=pluginText " Fixed version : (?<Fixed_version>.+)"

which are all designed to extract data from a single field (pluginText). The information in pluginText (the input) is as follows:

<plugin_output>
Model : Q6042-E
Software version : 5.55.1.2
Version source : HTTP
Fixed version : 6.50.1.2
</plugin_output>

That's literally everything inside it. What I've done is 4 different rex commands for Model, Software version, Version source, and Fixed version. But now my teacher is asking me to take those 4 rex commands and turn them into one. This is supposed to be complicated because there are carriage returns in the data. He says I should be able to do this with \n, for a new line, but I've tried it a couple of times and it's not working in Splunk. Can someone explain how I should go about doing this? Thank you in advance, I will give karma for helping!
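One way this is often done (a sketch, not tested against your exact events): use the inline (?s) flag so .*? can cross line breaks, while each capture group stops at the end of its own line:

```spl
| rex field=pluginText "(?s)Model : (?<Model>[^\r\n]+).*?Software version : (?<Software_version>[^\r\n]+).*?Version source : (?<Version_source>[^\r\n]+).*?Fixed version : (?<Fixed_version>[^\r\n]+)"
```

The [^\r\n]+ classes do the job of "everything up to the newline", which tends to be more reliable in rex than trying to match \n literally.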
Currently I have Splunk Cloud working without problems, and I need to install an app that allows me to view graphics from a mobile device. After finding an error message and submitting a ticket, they replied stating that the error message is due to the app being compatible only with Python 3. It scares me to request that they update to Python 3 and have the solution stop working. I would be grateful for an explanation in simple words, and if possible without sharing the upgrade links, since I have read them and they are not clear to me. What I need to know is how a change from Python 2 to Python 3 would affect things.
Hello, in the search below:

index=_audit action=alert_fired ss_app=app_name
| eval alert_severity = case(severity==1,"Information", severity==2,"Low", severity==3,"Medium", severity==4,"High", severity==5,"Critical")
| fields _time ss_name severity trigger_time alert_severity
| stats earliest(trigger_time) as min_time, latest(trigger_time) as max_time, sparkline(count) as Spark_line, count by ss_name alert_severity
| eval min_time = strftime(min_time, "%Y-%m-%d %H:%M:%S")
| eval max_time = strftime(max_time, "%Y-%m-%d %H:%M:%S")
| table ss_name, min_time, max_time count alert_severity
| rename ss_name as "Alert Name" min_time as "Start Time" max_time as "End Time" count as "Number of Alerts" alert_severity as "Criticality"

The sparkline produced is correct in count (image001.png) and presentation, but it is shrunk to a very small size and does not look good. So I tried to change from sparkline(count) to sparkline(count,30m). This time the layout is much better and the result is OK (image002.png), but the graphical presentation (points) is wrong. How can I get the right graphical presentation while keeping the sparkline wide enough?

image 001 image 002

Best regards, Altin
Hi all, a silly question. I have the below search result (in my application I'm printing logs for the different processing statuses of a specific order). I would like to know if (and how) it would be possible to extract the number of orders for which I have a "processStarted" log but not an "orderSaved" one, and another query to extract the orderNumbers for these cases.

orderNumber  action
123          processStarted
123          orderSaved
125          processStarted
125          orderSaved
301          processStarted

As per the above example, I would like:
1) a query to extract the count (1 in this case, since only order 301 doesn't have an "orderSaved" entry)
2) a query to extract the orderNumbers for which I have "processStarted" but not "orderSaved" (only 301 in this case)

Which operation would you suggest I investigate? Can you point me to some examples?
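One common pattern for this (sketched; the index name is a placeholder): collect each order's actions with stats values, then keep the orders missing "orderSaved":

```spl
index=your_index (action="processStarted" OR action="orderSaved")
| stats values(action) AS actions by orderNumber
| search actions="processStarted" NOT actions="orderSaved"
```

This returns the orderNumbers (301 in your example); appending | stats count turns it into the count for the first question.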
Hello all -  I'm creating a setup view (migrating from setup.xml) for an app and would like to continue to leverage the custom eai python handler that is already in place. Can anyone point me towards an example of this? I may be using web.conf incorrectly as I can access the handler via port 8089 but when I attempt to call the endpoint from the setup view via the splunkjs sdk it generates a 404 error. Once I resolve this issue I believe I should be able to pass all the parameters necessary into data: {TODO} in order to use the eai handler the same way we were when using setup.xml   app.js       callScript(){ $.ajax({ type: "POST", url: "/servicesNS/-/<app_name>/configureMyApp/handleConfig", data: {TODO}, contentType: "application/json; charset=utf-8", dataType: "json", success: function(data) { console.log("success returned"); console.log(data); }, error: function(xhr){ console.log("Error ", xhr); } }); }         restmap.conf     [admin:configureMyApp] match = /configureMyApp members = handleConfig [admin_external:handleConfig] handlertype = python python.version = python3 handlerfile = handler.py handleractions = list, edit handlerpersistentmode = false       web.conf     [expose:configureMyApp] methods = GET, POST pattern = configureMyApp [expose:handleConfig] pattern = configureMyApp/* methods = GET, POST    
2020-11-30T23:59:46.101621+00:00 fdb2.fdb-us-south-002 2020-11-30T23:59:45Z { "Severity": "10", "Time": "1606780785.516014", "Type": "SomewhatSlowRunLoopTop", "ID": "0000000000000000", "Elapsed":"0.0734675", "Machine": "10.185.175.43:4501", "LogGroup": "default" }

I want to know how I can extract "Severity": "10" and the Machine IP in a search from logs like this, and put them in a table format.
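Since the JSON body is preceded by a syslog-style prefix, spath on _raw may not apply directly; a hedged rex sketch for just these two fields (index is a placeholder):

```spl
index=your_index
| rex "\"Severity\": \"(?<Severity>\d+)\""
| rex "\"Machine\": \"(?<Machine>\d{1,3}(?:\.\d{1,3}){3}):\d+\""
| table _time Severity Machine
```

The Machine group captures only the IP, dropping the :4501 port; adjust the pattern if you want the port kept.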
Hey, is there anyone here good with rex expressions? I've been given a task by my boss to extract 4 new fields from the data in one field, pluginText. The data that's currently in pluginText is as follows:

<plugin_output>
Model : Q6042-E
Software version : 5.55.1.2
Version source : HTTP
Fixed version : 6.50.1.2
</plugin_output>

I thought I'd start by just cracking one field; if I can get one, I can sort out the others from that working model. So I figured I'd start with Version source, since the end result I want is a new field "Version source" that displays just 4 letters: "HTTP". The problem is that what I'm trying is not working. I thought THIS might work:

| rex field=pluginID "(?<Version source>\w\w\w\w)"

but it's just giving me an error: "Regex: syntax error in subpattern name (missing terminator)". What am I doing wrong? I'm a newbie at rex expressions, and reviewing the documents Splunk has put out hasn't helped much.
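The error comes from the space in the capture-group name: regex subpattern names cannot contain spaces. A sketch of the usual workaround (note it also targets pluginText, since pluginID in the attempt above looks like a typo):

```spl
| rex field=pluginText "Version source : (?<Version_source>\w+)"
| rename Version_source AS "Version source"
```

Extract into an underscore name first, then rename to the display name with spaces afterwards.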
Hello everyone, I have this query:

index="dpsnapitt" AND (class= "GRADE 12 B" OR class= "GRADE 12 B *") AND (day="DAY 4" OR NOT day=*)
| rename "start time" as start, "end time" as end
| rename class as Class
| rename email as organizer_email
| dedup subject, Class, organizer_email, start
| join type=outer organizer_email [search index="dpsn_teachers" earliest=0 latest=now | rename name.fullName as teacher | rename primaryEmail as organizer_email]
| join type=outer organizer_email max=0 [search index="dpsn_meet" | where email==organizer_email]
| rex field=date "(?<yy>[^\.]*)\-(?<mm>[^\.]*)\-(?<dd>[\S]*)T(?<hh>[^\.]*)\:(?<min>[^\.]*)\:(?<sec>[^\.]*)\."
| eval ndatetime = yy.mm.dd.hh.min.sec
| eval _time=strptime(ndatetime,"%Y%m%d%H%M%S") + 19800
| eval Time=strftime(_time, "%H:%M")
| eval p = strptime(start,"%H:%M")-1020 | eval q = strftime(p,"%H:%M")
| eval r = strptime(end,"%H:%M") | eval s = strftime(r,"%H:%M")
| eval meet_start_time = strftime((strptime(Time,"%H:%M")-duration_seconds), "%H:%M")
| eval z = if((meet_start_time >= q AND meet_start_time <= s), 1, 0)
| eval meeting_code = case(z == 1, meeting_code, z==0, "N/A")
| dedup organizer_email, meeting_code, start, end
| table Class, teacher, organizer_email, subject, "start", "end", meeting_code
| sort - period - day
| reverse

If I run this query on 4 Dec 2020, it produces the following result (see image). Rows where a meeting code is present are getting duplicated with meeting_code "N/A", and I don't want those duplicated rows. Please help; I would be very thankful.
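One hedged way to drop the "N/A" duplicates only when the same row group also has a real code (assuming organizer_email, start, and end identify a row group):

```spl
| eventstats max(eval(if(meeting_code!="N/A", 1, 0))) AS has_code by organizer_email, start, end
| where NOT (meeting_code="N/A" AND has_code=1)
| fields - has_code
```

Placed just before the final table command, this keeps an "N/A" row only for groups that have no real meeting_code at all.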
Good day folks, after migrating and upgrading from 2016 DataCenter 'All-in-one' 8.0.0 to 2019 DataCenter Core 'All-in-one' 8.1.0.1 Build:24fd52428b5a, I have noticed a few things don't seem to be populating with the new server name. For example, in Monitoring Console -> Indexing -> Indexing Performance: Instance, the "Instance" drop-down input box contains only the old server name and therefore does not produce any metrics for the new server. I have verified and validated the new name in server.conf. I have checked metrics.log and data is flowing. I don't see anywhere I can edit this report/form to correct the drop-down data. Where else can I see where this drop-down is being populated from? Thank you, Greg