All Topics

We have been using CentOS on some of our Splunk servers, and now that it reaches End of Life on December 31, 2021, we are looking to rebuild the servers with a new OS. The new standard from our Linux team is Rocky Linux. Since Rocky is a relatively new distro, we do not have any experience running Splunk on this OS. Is there anyone out there who has that experience and can share?
Hi, I have events that have more than 20 lines of data. In the field extraction menu, only the first 20 lines are shown, which prevents me from extracting fields that are beyond the 20th line. Is there a way to show more lines? Can I get the required fields in another way? My fields all have the same format, like $_NAME: VALUE, and there are about 1200 different values in one event. Can I auto-extract all the fields from my events? (They all have the same sourcetype.)
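Since every pair follows the same NAME: VALUE shape, a search-time extraction would sidestep the 20-line preview limit entirely. A minimal sketch using the extract command, assuming one pair per line and a colon as the only key/value delimiter (the index and sourcetype names are placeholders; Splunk will also clean leading "$_" out of field names):

index=your_index sourcetype=your_sourcetype
| extract pairdelim="\n" kvdelim=":"

If this pulls the pairs correctly, the same delimiters can be made permanent with a DELIMS-based extraction in transforms.conf.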
WildFly application servers expose a /metrics REST endpoint. What is the best way to get the data provided by WildFly's /metrics into Splunk? What we have found is the "REST API Modular Input" app (https://splunkbase.splunk.com/app/1546/), but this costs $99 per connection and we have 200+ different WildFly servers. Comparing that to Prometheus + Grafana, which is free and consumes such an API out of the box, this solution would be hard to justify. But as we already have a Splunk environment we would like to keep it, so there must be a better option that costs less than the REST API Modular Input. We have a Splunk forwarder on all WildFly servers, so it should be possible to grab the data somehow and push it to Splunk. We have also seen the "Splunk Add-on for Java Management Extensions", but this seems like reinventing the wheel, as the data necessary for monitoring is already provided by the /metrics endpoint. And opening a production server for remote JMX access seems odd: JMX can do anything to that server, not just performance monitoring, which feels like a severe security risk, and JMX security and MBeans change from release to release. Who can help?
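Since a forwarder already runs on every WildFly host, one low-cost pattern is a scripted input that curls the local endpoint and lets the UF index whatever the script prints. A minimal sketch, assuming /metrics is served on the local management port 9990 and that the script path, sourcetype, and index are yours to choose:

inputs.conf on the forwarder:
[script:///opt/splunk_scripts/wildfly_metrics.sh]
interval = 60
sourcetype = wildfly:metrics
index = wildfly

/opt/splunk_scripts/wildfly_metrics.sh:
#!/bin/sh
# print the Prometheus-format metrics to stdout; the UF indexes stdout
curl -s http://localhost:9990/metrics

The Prometheus exposition format would then need search-time parsing, or a props/transforms pass if you want true metric events.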
Hello Splunk Community, I have created a dashboard with 3 dropdowns: Select System, Select Environment, Select Period (time). Note: each system has named their environments the same, i.e. Production, UAT, etc. I seem to be having a problem when I have already selected all dropdowns and the metrics load, then I change the System dropdown: the Environment dropdown seems to update, but there is 1 duplicate (i.e. in the previous search I selected the 'Production' environment and now I have 2 Production environments, presumably one for each system). Can someone assist me in figuring out how to clear the Environment dropdown when I change the System? I have tried to play around with the settings within the UI but no luck. Is there something I need to change in my source code?

<fieldset submitButton="false" autoRun="false">
  <input type="dropdown" token="CMDB_CI_Name" searchWhenChanged="true">
    <label>Select IT Services</label>
    <fieldForLabel>CMDB_CI_Name</fieldForLabel>
    <fieldForValue>CMDB_CI_Name</fieldForValue>
    <search>
      <query>|inputlookup list.csv | fields CMDB_CI_Name | dedup CMDB_CI_Name</query>
      <earliest>-4h@m</earliest>
      <latest>now</latest>
    </search>
  </input>
  <input type="dropdown" token="env" searchWhenChanged="true">
    <label>Select Environment</label>
    <change>
      <set token="tokEnvironment">$label$</set>
    </change>
    <fieldForLabel>Env_Purpose</fieldForLabel>
    <fieldForValue>Env_Infra</fieldForValue>
    <search>
      <query>|inputlookup list.csv | search CMDB_CI_Name=$CMDB_CI_Name$ | fields Env_Purpose, Env_Infra | dedup Env_Purpose, Env_Infra</query>
    </search>
  </input>
  <input type="time" token="time_token" searchWhenChanged="true">
    <label>Time Period</label>
    <default>
      <earliest>-7d@h</earliest>
      <latest>now</latest>
    </default>
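One common fix, sketched below, is to unset the environment token whenever the system dropdown changes, so the stale selection cannot linger. This is an assumption about the cause, not a confirmed diagnosis; the <change> block would be added to the first dropdown:

<input type="dropdown" token="CMDB_CI_Name" searchWhenChanged="true">
  <label>Select IT Services</label>
  <change>
    <!-- clear the previous environment selection when the system changes -->
    <unset token="form.env"></unset>
    <unset token="env"></unset>
  </change>
  ...
</input>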
Hi, I am trying to construct a report showing when the response time is over a threshold percentage, and how many minutes it has been over within a time range. I can do the threshold part, but I am stuck on calculating how many minutes it has been over the percentage in a time frame. Any help would be greatly appreciated. Thanks, Joe
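One way to count the minutes, sketched under assumptions (the field name response_time and the 90 threshold are placeholders): bucket the data into one-minute spans, flag each bucket that breaches the threshold, and sum the flags.

index=your_index
| timechart span=1m avg(response_time) as avg_rt
| eval over=if(avg_rt > 90, 1, 0)
| stats sum(over) as minutes_over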
Hi, 1) Which integration method should be used when data is onboarded, given the following options: a) HEC method, b) TCP method, c) DB Connect? 2) How many API scripts can we run on a HF? If possible, can you please suggest documentation, and also the uses for each of the above methods individually?
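For orientation: HEC receives events over HTTPS, a TCP input listens on a raw network port, and DB Connect polls databases on a schedule. A minimal HEC sketch (host, port, token, and payload are placeholders):

curl -k https://your-splunk:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hello from HEC", "sourcetype": "demo"}'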
Hi all, @dmarling and @efavreau: I have been using the Paychex "Cover Your Assets" techniques from the 2019 Splunk Conference to export user config and load it into Splunk Cloud. I have used it for a few sites, but with the latest site I have a problem where alerts defined with Time Range set to Custom have loaded into Cloud with Time Range set to "All Time". This will obviously cause a performance problem, especially as many alerts run frequently and the time range is usually set to 5 minutes. Has anyone else noticed these settings being lost in the Paychex process? For example, an alert saved with a custom time range now shows "All Time" (screenshots omitted). I have checked and can see that the first Paychex SPL worked fine, as I can find these fields in the resulting csv. But the second Paychex SPL that assembles the CreateCurl has dropped these fields:

curl -k -H "Authorization: Splunk XXXXXXXXXXXXXXXX" "https://XXXXXXXX:8089/servicesNS/nobody/search/saved/searches" -d name="AWS ASG ELB Activity" -d search="%28index%3Daws%20OR%20index%3Dclick%29%20sourcetype%3D%22aws%3Acloudtrail%22%20%20userAgent%3D%22autoscaling%2Eamazonaws%2Ecom%22%20accountName%3DProduction%20%20%28eventName%3D%20%20%22DeregisterInstancesFromLoadBalancer%22%20OR%20%20eventName%3D%20%22RegisterInstancesWithLoadBalancer%22%29%7C%20spath%20path%3DrequestParameters%2Einstances%7B%7D%2EinstanceId%20output%3Dinstances%20%20%20%7C%20eval%20slack%5Fmessage%20%3D%20strftime%28%5Ftime%2C%20%22%20%25Y%2D%25m%2D%25d%20%25H%3A%25M%3A%25S%22%29%20%2E%20%22%20autoscaling%20%22%7Ceval%20slack%5Fmessage%20%3D%20slack%5Fmessage%20%2E%20if%28eventName%3D%22RegisterInstancesWithLoadBalancer%22%2C%20%22%20added%20%22%2C%20%22%20removed%20%22%29%20%7Ceval%20instance%5Ftotal%3Dmvcount%28%09%0A%27responseElements%2Einstances%7B%7D%2EinstanceId%27%29%7Ceval%20instance%5Fcount%3Dmvcount%28instances%29%20%7C%20eval%20instance%5Flist%3Dmvjoin%28instances%2C%22%3B%22%29%20%20%7C%20eval%20slack%5Fmessage%20%3D%20slack%5Fmessage%20%2E%20instance%5Fcount%20%2E%20if%28instance%5Fcount%3D1%2C%20%22%20instance%22%2C%20%22%20instances%22%29%20%2E%20if%28eventName%3D%22RegisterInstancesWithLoadBalancer%22%2C%20%22%20to%22%2C%20%22%20from%22%29%20%2E%20%22%20load%20balancer%20%22%20%2E%20%27requestParameters%2EloadBalancerName%27%20%2E%20%22%2C%20new%20instance%20count%20is%20%22%20%2E%20instance%5Ftotal%20%2E%20%22%20%28%22%20%2E%20instance%5Flist%20%2E%22%29%22%20%7C%20table%20%20slack%5Fmessage%20%7Csort%20%2Dslack%5Fmessage" -d description="" -d auto_summarize.cron_schedule="%2A%2F10%20%2A%20%2A%20%2A%20%2A" -d cron_schedule="%2A%2F5%20%2A%20%2A%20%2A%20%2A" -d is_scheduled="1" -d schedule_window="0" -d action.email="0" -d action.email.sendresults="" -d action.email.to="" -d action.keyindicator.invert="0" -d action.makestreams.param.verbose="0" -d action.notable.param.verbose="0" -d action.populate_lookup="0" -d action.risk.param.verbose="0" -d action.rss="0" -d action.script="0" -d action.slack="1" -d action.slack.param.channel="%23digital%2Dprod%2Daudit" -d action.slack.param.message="%24result%2Eslack%5Fmessage%24" -d action.summary_index="0" -d action.summary_index.force_realtime_schedule="0" -d actions="slack" -d alert.digest_mode="0" -d alert.expires="24h" -d alert.managedBy="" -d alert.severity="3" -d alert.suppress="0" -d alert.suppress.fields="" -d alert.suppress.group_name="" -d alert.suppress.period="" -d alert.track="0" -d alert_comparator="greater%20than" -d alert_condition="" -d alert_threshold="0" -d alert_type="number%20of%20events" -d display.events.fields="%5B%22host%22%2C%22source%22%2C%22sourcetype%22%5D"
-d display.events.list.drilldown="full" -d display.events.list.wrap="1" -d display.events.maxLines="5" -d display.events.raw.drilldown="full" -d display.events.rowNumbers="0" -d display.events.table.drilldown="1" -d display.events.table.wrap="1" -d display.events.type="list" -d display.general.enablePreview="1" -d display.general.migratedFromViewState="0" -d display.general.timeRangePicker.show="1" -d display.general.type="statistics" -d display.page.search.mode="verbose" -d display.page.search.patterns.sensitivity="0%2E8" -d display.page.search.showFields="1" -d display.page.search.tab="statistics" -d display.page.search.timeline.format="compact" -d display.page.search.timeline.scale="linear" -d display.statistics.drilldown="cell" -d display.statistics.overlay="none" -d display.statistics.percentagesRow="0" -d display.statistics.rowNumbers="0" -d display.statistics.show="1" -d display.statistics.totalsRow="0" -d display.statistics.wrap="1" -d display.visualizations.chartHeight="300" -d display.visualizations.charting.axisLabelsX.majorLabelStyle.overflowMode="ellipsisNone" -d display.visualizations.charting.axisLabelsX.majorLabelStyle.rotation="0" -d display.visualizations.charting.axisLabelsX.majorUnit="" -d display.visualizations.charting.axisLabelsY.majorUnit="" -d display.visualizations.charting.axisLabelsY2.majorUnit="" -d display.visualizations.charting.axisTitleX.text="" -d display.visualizations.charting.axisTitleX.visibility="visible" -d display.visualizations.charting.axisTitleY.text="" -d display.visualizations.charting.axisTitleY.visibility="visible" -d display.visualizations.charting.axisTitleY2.text="" -d display.visualizations.charting.axisTitleY2.visibility="visible" -d display.visualizations.charting.axisX.abbreviation="none" -d display.visualizations.charting.axisX.maximumNumber="" -d display.visualizations.charting.axisX.minimumNumber="" -d display.visualizations.charting.axisX.scale="linear" -d display.visualizations.charting.axisY.abbreviation="none" -d display.visualizations.charting.axisY.maximumNumber="" -d display.visualizations.charting.axisY.minimumNumber="" -d display.visualizations.charting.axisY.scale="linear" -d display.visualizations.charting.axisY2.abbreviation="none" -d display.visualizations.charting.axisY2.enabled="0" -d display.visualizations.charting.axisY2.maximumNumber="" -d display.visualizations.charting.axisY2.minimumNumber="" -d display.visualizations.charting.axisY2.scale="inherit" -d display.visualizations.charting.chart="column" -d display.visualizations.charting.chart.bubbleMaximumSize="50" -d display.visualizations.charting.chart.bubbleMinimumSize="10" -d display.visualizations.charting.chart.bubbleSizeBy="area" -d display.visualizations.charting.chart.nullValueMode="gaps" -d display.visualizations.charting.chart.overlayFields="" -d display.visualizations.charting.chart.rangeValues="" -d display.visualizations.charting.chart.showDataLabels="none" -d display.visualizations.charting.chart.sliceCollapsingThreshold="0%2E01" -d display.visualizations.charting.chart.stackMode="default" -d display.visualizations.charting.chart.style="shiny" -d display.visualizations.charting.drilldown="all" -d display.visualizations.charting.fieldColors="" -d display.visualizations.charting.fieldDashStyles="" -d display.visualizations.charting.gaugeColors="" -d display.visualizations.charting.layout.splitSeries="0" -d display.visualizations.charting.layout.splitSeries.allowIndependentYRanges="0" -d 
display.visualizations.charting.legend.labelStyle.overflowMode="ellipsisMiddle" -d display.visualizations.charting.legend.mode="standard" -d display.visualizations.charting.legend.placement="right" -d display.visualizations.charting.lineWidth="2" -d display.visualizations.custom.drilldown="all" -d display.visualizations.custom.height="" -d display.visualizations.custom.type="" -d display.visualizations.mapHeight="400" -d display.visualizations.mapping.choroplethLayer.colorBins="5" -d display.visualizations.mapping.choroplethLayer.colorMode="auto" -d display.visualizations.mapping.choroplethLayer.maximumColor="0xaf575a" -d display.visualizations.mapping.choroplethLayer.minimumColor="0x62b3b2" -d display.visualizations.mapping.choroplethLayer.neutralPoint="0" -d display.visualizations.mapping.choroplethLayer.shapeOpacity="0%2E75" -d display.visualizations.mapping.choroplethLayer.showBorder="1" -d display.visualizations.mapping.data.maxClusters="100" -d display.visualizations.mapping.drilldown="all" -d display.visualizations.mapping.legend.placement="bottomright" -d display.visualizations.mapping.map.center="%280%2C0%29" -d display.visualizations.mapping.map.panning="1" -d display.visualizations.mapping.map.scrollZoom="0" -d display.visualizations.mapping.map.zoom="2" -d display.visualizations.mapping.markerLayer.markerMaxSize="50" -d display.visualizations.mapping.markerLayer.markerMinSize="10" -d display.visualizations.mapping.markerLayer.markerOpacity="0%2E8" -d display.visualizations.mapping.showTiles="1" -d display.visualizations.mapping.tileLayer.maxZoom="7" -d display.visualizations.mapping.tileLayer.minZoom="0" -d display.visualizations.mapping.tileLayer.tileOpacity="1" -d display.visualizations.mapping.tileLayer.url="" -d display.visualizations.mapping.type="marker" -d display.visualizations.show="1" -d display.visualizations.singlevalue.afterLabel="" -d display.visualizations.singlevalue.beforeLabel="" -d display.visualizations.singlevalue.colorBy="value" -d display.visualizations.singlevalue.colorMode="none" -d display.visualizations.singlevalue.drilldown="none" -d display.visualizations.singlevalue.numberPrecision="0" -d display.visualizations.singlevalue.rangeColors="%5B%220x53a051%22%2C%20%220x0877a6%22%2C%20%220xf8be34%22%2C%20%220xf1813f%22%2C%20%220xdc4e41%22%5D" -d display.visualizations.singlevalue.rangeValues="%5B0%2C30%2C70%2C100%5D" -d display.visualizations.singlevalue.showSparkline="1" -d display.visualizations.singlevalue.showTrendIndicator="1" -d display.visualizations.singlevalue.trendColorInterpretation="standard" -d display.visualizations.singlevalue.trendDisplayMode="absolute" -d display.visualizations.singlevalue.trendInterval="" -d display.visualizations.singlevalue.underLabel="" -d display.visualizations.singlevalue.unit="" -d display.visualizations.singlevalue.unitPosition="after" -d display.visualizations.singlevalue.useColors="0" -d display.visualizations.singlevalue.useThousandSeparators="1" -d display.visualizations.singlevalueHeight="115" -d display.visualizations.trellis.enabled="0" -d display.visualizations.trellis.scales.shared="1" -d display.visualizations.trellis.size="medium" -d display.visualizations.trellis.splitBy="" -d display.visualizations.type="charting"   I really like this process and am keen to work out a solution but am asking in case someone else has already resolved it. Thanks heaps.
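For what it's worth, the saved-search REST endpoint stores the search window in dispatch.earliest_time and dispatch.latest_time, and neither appears anywhere in the generated curl above. A sketch of what the CreateCurl would need to append, assuming the usual 5-minute window (the real values should come from the exported csv):

-d dispatch.earliest_time="-5m" -d dispatch.latest_time="now"

If the first SPL already captures those columns, the fix is probably just adding them to the list of fields the second SPL turns into -d parameters.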
Hello all, I have a saved search that I want to run once every Sunday at 00:00. I have added earliest=-7d@d latest=@m in the query to pick the events for the last 7 days. I have also scheduled it to run every week on Sunday at 00:00 with the time range set to Last 7 Days. When I run the saved search manually it works as expected, and when I change the schedule to run every 5 minutes over the last 7 days it is able to index the data. However, when I schedule it to run once every week, even though the search runs, the data is not being indexed to tier3. When I checked the job manager, the run completed successfully but no data was pushed to tier3. Can you please help with this?
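As a point of comparison, here is a minimal savedsearches.conf sketch for a weekly Sunday 00:00 run that summary-indexes into tier3. The stanza name and the assumption that tier3 is filled via the summary-index action are mine, not confirmed from the post:

[my_weekly_search]
cron_schedule = 0 0 * * 0
dispatch.earliest_time = -7d@d
dispatch.latest_time = @m
action.summary_index = 1
action.summary_index._name = tier3

Note that inline earliest/latest in the SPL override the scheduled time range, so it is worth checking that the two do not conflict for the weekly run.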
I am using Splunk 8.x.x, and the splunkd process crashed while a search was running. On investigation, I found the following:
* It is splunkd itself, not the search process, that is dying with a segmentation fault (SIGSEGV).
* Memory is not exhausted, and there is no trace in the messages file of the oom-killer having fired.
* No crash log has been generated under the Splunk log directory.
* It appears a long search string containing a large number of commands was being run.
Why would the splunkd process suddenly crash like this? Is there a way to deal with it?
Has anyone implemented the Splunk Federated Search feature? If yes, can someone please help us with the issue below? We have set up federated search across two Splunk Cloud instances and developed an alert on our instance 1 SH. Whenever the alert condition is met, the alert is not triggered, and in the job manager we see 0 events for that timeframe; yet when we open the search and run it manually, we do see events. We are seeing another issue as well when we try to write data from the other instance to a lookup file using a scheduled search: there is data loss while writing to the lookup file.
I have Splunk queries that generate 2 different tables with the same fields (METHOD, URI, COUNT). I want to diff them based on URI and also on the count. E.g.:

tableA
METHOD  URI       COUNT
GET     1/0/foo   3
PUT     1/0/bar   11

tableB
METHOD  URI       COUNT
GET     1/0/foo   2
PUT     1/0/bar   11
PUT     1/0/buzz  1

Is there a way to take the difference between the 2 tables based on METHOD+URI and COUNT? The result should be something like:

METHOD  URI       COUNT
GET     1/0/foo   1
PUT     1/0/buzz  1
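One sketch that avoids join: append the second result set with negated counts, sum by METHOD and URI, and keep only the non-zero differences. <search_A> and <search_B> stand in for the two queries:

<search_A> | stats sum(COUNT) as COUNT by METHOD URI
| append [ search <search_B> | stats sum(COUNT) as COUNT by METHOD URI | eval COUNT=-COUNT ]
| stats sum(COUNT) as diff by METHOD URI
| where diff!=0
| eval COUNT=abs(diff)
| table METHOD URI COUNT

On the example above this yields 1/0/foo with COUNT 1 and 1/0/buzz with COUNT 1, since 1/0/bar cancels out.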
Hello, I have csv source files without headers; sample events from that file and the props.conf I wrote are given below. Values in the first column can be used as timestamps. How should I write the props.conf for this csv source file? I am getting error messages about the timestamps, and some extra columns appear at the beginning of events. Any help will be highly appreciated. Thank you so much. Here is what I wrote:

[csv]
SHOULD_LINEMERGE=false
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=csv
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%6Q
FIELD_NAMES=f1,f2,f3,f4,f5,f6,f7,f8

5 Sample Events:
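With INDEXED_EXTRACTIONS, the timestamp is usually picked out with TIMESTAMP_FIELDS rather than TIME_PREFIX. A sketch of an adjusted stanza, under the assumption that the first column f1 holds the timestamp:

# props.conf sketch: structured CSV with the timestamp taken from column f1
[csv]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = f1,f2,f3,f4,f5,f6,f7,f8
TIMESTAMP_FIELDS = f1
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6Q
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true

Also note that with INDEXED_EXTRACTIONS the stanza must live where the structured parsing happens (on the forwarder reading the file), which could explain settings appearing to be ignored.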
Hello - I'm working with a dashboard with a time picker whose token value is $time$. _time is currently set to the value of another field using:

| eval _time = _mytime

I have a timechart in the dashboard:

| timechart count limit=24 useother=f usenull=f

which results in:

2021-10-26 1
2021-10-27 417
2021-10-28 36
2021-10-29 15
2021-10-30 21
2021-10-31 3
2021-11-01 10
2021-11-02 3
2021-11-03 1

When I click on a bar in the timechart, for example the bar for 2021-10-27, I would like my time picker to change to that date and the dashboard to redraw for all the events of that day. I tried setting:

<drilldown>
  <set token="time_earliest">$earliest$</set>
  <set token="time_latest">$latest$</set>
</drilldown>

I have also tried:

<drilldown>
  <set token="form.time_earliest">$earliest$</set>
  <set token="form.time_latest">$latest$</set>
</drilldown>

Any suggestions?
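Since the time picker's token is $time$, its earliest/latest sub-keys are usually addressed with dots rather than underscores, and prefixed with form. so the input itself updates. A sketch under that assumption:

<drilldown>
  <!-- "time" must match the time picker's token name -->
  <set token="form.time.earliest">$earliest$</set>
  <set token="form.time.latest">$latest$</set>
</drilldown>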
Hi folks, I have the below requirement. I have a dashboard with a time picker (with a token) and a bar chart panel. Let's say I choose 15 days from the time picker: it shows the data in 15 bars (Oct 20th to Nov 3rd). Now suppose I click on a bar ($click.value$); it takes me to the next panel, where I want to see the data from $click.value$ back to 15 days earlier. E.g. if I click on the bar for Oct 20th, the next panel should show me data for the previous 15 days (Oct 6th to Oct 20th). Can someone help me with setting up the earliest and latest times through tokens for this scenario?
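One sketch, with the caveat that the exact format of $click.value$ (epoch number vs. ISO string) depends on the chart, so the strptime fallback may need adjusting:

<drilldown>
  <!-- the clicked bucket becomes the latest time; earliest is 15 days before it -->
  <eval token="tok_latest">coalesce(tonumber("$click.value$"), strptime("$click.value$", "%Y-%m-%dT%H:%M:%S.%3N%z"))</eval>
  <eval token="tok_earliest">relative_time($tok_latest$, "-15d")</eval>
</drilldown>

The second panel's search would then use <earliest>$tok_earliest$</earliest> and <latest>$tok_latest$</latest>.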
Hello, I use Splunk to look at Office 365 email, but I don't see header info relating to TLS, which we are looking for data on. How do I pull this info into Splunk? Is it in a different log? Thanks
When doing a hunting exercise on an ethical-hack system, I'm looking for an efficient way to find the unique breadcrumbs on this system compared to all the other systems in the same time window. Suppose the EH system 1 has processes A,B,C,D whereas all the other systems have processes A,C,D,E,F,G,H.... The result I'm looking for is process=B, which was found only on system 1. I have tried subsearches / join etc. but seem to run in circles. All help is much appreciated. Since the full population (everything except system 1) can be a very large dataset, it's important to make the SPL as efficient as possible.
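A single-pass sketch that avoids joins and subsearches entirely: count distinct hosts per process and keep the processes seen on exactly one host, that host being system 1. The index, sourcetype, and field names are assumptions:

index=endpoint sourcetype=process_events
| stats dc(host) as host_count values(host) as hosts by process
| where host_count=1 AND hosts="system1"

This scans the population once, so it scales with event volume rather than with the number of system-to-system comparisons.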
Hello, is it possible for the UF to remove/delete files once they have been pushed to the indexer? How would I do that? Thank you; any help will be highly appreciated.
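The built-in way is the batch input, which reads each file once and then deletes it. A sketch of the inputs.conf stanza, with a placeholder path and sourcetype; note this is only safe for files that are no longer being written to:

# batch input: each file is indexed and then removed ("sinkhole")
[batch:///var/spool/myapp]
move_policy = sinkhole
sourcetype = myapp:log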
Hi, I have content like the below, and I would like to extract the git URL from it. Please suggest how to do it using rex.

Content: proj_url\x1B[0;m=https://my.test.net/sample/test.git test\x1B[0;m=abcd.

Output should be: https://my.test.net/sample/test.git

Any help is appreciated. Thanks.
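A sketch that keys off the proj_url label and the .git suffix, so the ANSI escape noise in between does not matter:

| rex field=_raw "proj_url\S*?=(?<git_url>https?:\/\/\S+?\.git)"

On the sample content this captures git_url=https://my.test.net/sample/test.git.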
We've recently started a Splunk Cloud instance and are attempting to send data to it locally, so we have all the steps ready to push to servers. I've followed the installation instructions pretty much everywhere a few times and still have no solution. An example of the steps taken can be found here: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2109/Admin/UnixGDI, with the exception that I install through a .dmg and my universal forwarder lives at /Applications/SplunkForwarder. I've been digging around to try to see what could have gone wrong. I haven't messed with any of the configuration files yet; I just added the app with the credentials file and added a monitor to the log file. I can tail the log file locally and things print to it fine, and the file mapping is correct. The only thing I've noticed is that if I go to $SPLUNK_HOME/etc/system/local there's no `inputs.conf` file, but I'm not sure that's even required. Does anyone have any ideas on where to even start hunting down the issue? Also, if I run ./bin/splunk list forward-server, the forward-server successfully shows up under active.
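Two quick checks that often narrow this down: confirm what the UF actually thinks it is monitoring, and where that input is defined (monitors added via the CLI usually land in an app's local directory, e.g. etc/apps/search/local/inputs.conf, not etc/system/local):

/Applications/SplunkForwarder/bin/splunk list monitor
/Applications/SplunkForwarder/bin/splunk btool inputs list --debug

If the monitor shows up there, the next place to look is $SPLUNK_HOME/var/log/splunk/splunkd.log for TailReader or connection errors.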
Hey splunksters, I don't do much powershelling, but I have a big list of Windows Azure servers that need to have the universal forwarder installed. Does anyone have a PowerShell script to install a forwarder on multiple remote (Azure) Windows machines? Preferably the script should check whether a forwarder is already installed and skip the machine if it is. Thanks
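Not a tested deployment script, just a minimal sketch under assumptions: WinRM/Invoke-Command works to each host, the server list lives in servers.txt, and the MSI sits on a share reachable from the remote machines. The SplunkForwarder service name and the AGREETOLICENSE MSI property are standard for the Windows UF installer:

# install the UF remotely, skipping hosts where the service already exists
$servers = Get-Content .\servers.txt
foreach ($s in $servers) {
    Invoke-Command -ComputerName $s -ScriptBlock {
        if (Get-Service SplunkForwarder -ErrorAction SilentlyContinue) {
            Write-Output "$env:COMPUTERNAME: forwarder already installed, skipping"
            return
        }
        Start-Process msiexec.exe -ArgumentList '/i "\\fileshare\splunkforwarder-x64.msi" AGREETOLICENSE=Yes /quiet' -Wait
    }
}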